Nov 5 15:47:09.183249 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:47:09.183274 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:47:09.183283 kernel: BIOS-provided physical RAM map:
Nov 5 15:47:09.183289 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 5 15:47:09.183295 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 5 15:47:09.183304 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 15:47:09.183311 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 5 15:47:09.183317 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 5 15:47:09.183324 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 5 15:47:09.183330 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 5 15:47:09.183336 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:47:09.183342 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 15:47:09.183349 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 5 15:47:09.183357 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:47:09.183365 kernel: NX (Execute Disable) protection: active
Nov 5 15:47:09.183371 kernel: APIC: Static calls initialized
Nov 5 15:47:09.183378 kernel: SMBIOS 2.8 present.
Nov 5 15:47:09.183385 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 5 15:47:09.183394 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:47:09.183400 kernel: Hypervisor detected: KVM
Nov 5 15:47:09.183407 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 15:47:09.183413 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:47:09.183420 kernel: kvm-clock: using sched offset of 5976587085 cycles
Nov 5 15:47:09.183427 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:47:09.183434 kernel: tsc: Detected 2000.002 MHz processor
Nov 5 15:47:09.183442 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:47:09.183449 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:47:09.183458 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 5 15:47:09.183465 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 15:47:09.183473 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:47:09.183480 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 15:47:09.184138 kernel: Using GB pages for direct mapping
Nov 5 15:47:09.184148 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:47:09.184155 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 5 15:47:09.184166 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:47:09.184173 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:47:09.184180 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:47:09.184187 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 5 15:47:09.184194 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:47:09.184202 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:47:09.184214 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:47:09.184221 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:47:09.184229 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 5 15:47:09.184236 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 5 15:47:09.184244 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 5 15:47:09.184253 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 5 15:47:09.184260 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 5 15:47:09.184268 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 5 15:47:09.184275 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 5 15:47:09.184282 kernel: No NUMA configuration found
Nov 5 15:47:09.184289 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 5 15:47:09.184297 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Nov 5 15:47:09.184304 kernel: Zone ranges:
Nov 5 15:47:09.184314 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:47:09.184321 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 5 15:47:09.184328 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 15:47:09.184335 kernel: Device empty
Nov 5 15:47:09.184342 kernel: Movable zone start for each node
Nov 5 15:47:09.184349 kernel: Early memory node ranges
Nov 5 15:47:09.184357 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 15:47:09.184364 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 5 15:47:09.184373 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 15:47:09.184477 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 5 15:47:09.184510 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:47:09.184519 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 15:47:09.184526 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 5 15:47:09.184534 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:47:09.184541 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:47:09.184552 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:47:09.184559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:47:09.184566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:47:09.184574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:47:09.184581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:47:09.184588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:47:09.184595 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:47:09.184605 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:47:09.184612 kernel: TSC deadline timer available
Nov 5 15:47:09.184619 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:47:09.184627 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:47:09.184634 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:47:09.184641 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:47:09.184648 kernel: CPU topo: Num. cores per package: 2
Nov 5 15:47:09.184655 kernel: CPU topo: Num. threads per package: 2
Nov 5 15:47:09.184665 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 15:47:09.184672 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:47:09.184679 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 15:47:09.184686 kernel: kvm-guest: setup PV sched yield
Nov 5 15:47:09.184694 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 5 15:47:09.184701 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:47:09.184708 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:47:09.184718 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 15:47:09.184725 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 15:47:09.184733 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 15:47:09.184740 kernel: pcpu-alloc: [0] 0 1
Nov 5 15:47:09.184747 kernel: kvm-guest: PV spinlocks enabled
Nov 5 15:47:09.184754 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 15:47:09.184763 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:47:09.184773 kernel: random: crng init done
Nov 5 15:47:09.184780 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 15:47:09.184787 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:47:09.184795 kernel: Fallback order for Node 0: 0
Nov 5 15:47:09.184802 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 5 15:47:09.184809 kernel: Policy zone: Normal
Nov 5 15:47:09.184816 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:47:09.184826 kernel: software IO TLB: area num 2.
Nov 5 15:47:09.184833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 15:47:09.184840 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:47:09.184847 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:47:09.184855 kernel: Dynamic Preempt: voluntary
Nov 5 15:47:09.184862 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:47:09.184870 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:47:09.184879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 15:47:09.184887 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:47:09.184894 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:47:09.184901 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:47:09.184909 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:47:09.184916 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 15:47:09.184923 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:47:09.184940 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:47:09.184947 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:47:09.184955 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 15:47:09.184965 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:47:09.184972 kernel: Console: colour VGA+ 80x25
Nov 5 15:47:09.184980 kernel: printk: legacy console [tty0] enabled
Nov 5 15:47:09.184987 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:47:09.184995 kernel: ACPI: Core revision 20240827
Nov 5 15:47:09.185005 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:47:09.185012 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:47:09.185020 kernel: x2apic enabled
Nov 5 15:47:09.185027 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:47:09.185035 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 15:47:09.185043 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 15:47:09.185052 kernel: kvm-guest: setup PV IPIs
Nov 5 15:47:09.185060 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:47:09.185067 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns
Nov 5 15:47:09.185075 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
Nov 5 15:47:09.185083 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 15:47:09.185090 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 15:47:09.185098 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 15:47:09.185107 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:47:09.185115 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:47:09.185122 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:47:09.185130 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 5 15:47:09.185137 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:47:09.185145 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:47:09.185153 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 15:47:09.185163 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 15:47:09.185171 kernel: active return thunk: srso_alias_return_thunk
Nov 5 15:47:09.185178 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 15:47:09.185186 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 5 15:47:09.185193 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 15:47:09.185201 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:47:09.185208 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:47:09.185218 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:47:09.185414 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 5 15:47:09.185422 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:47:09.185429 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 5 15:47:09.185437 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 5 15:47:09.185444 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:47:09.185452 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:47:09.185462 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:47:09.185469 kernel: landlock: Up and running.
Nov 5 15:47:09.185476 kernel: SELinux: Initializing.
Nov 5 15:47:09.185503 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:47:09.185511 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:47:09.185519 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 5 15:47:09.185526 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 15:47:09.185536 kernel: ... version: 0
Nov 5 15:47:09.185544 kernel: ... bit width: 48
Nov 5 15:47:09.185551 kernel: ... generic registers: 6
Nov 5 15:47:09.185559 kernel: ... value mask: 0000ffffffffffff
Nov 5 15:47:09.185566 kernel: ... max period: 00007fffffffffff
Nov 5 15:47:09.185574 kernel: ... fixed-purpose events: 0
Nov 5 15:47:09.185581 kernel: ... event mask: 000000000000003f
Nov 5 15:47:09.185591 kernel: signal: max sigframe size: 3376
Nov 5 15:47:09.185598 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:47:09.185606 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:47:09.185613 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:47:09.185621 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:47:09.185628 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:47:09.185636 kernel: .... node #0, CPUs: #1
Nov 5 15:47:09.185645 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:47:09.185653 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 5 15:47:09.185661 kernel: Memory: 3984336K/4193772K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 204760K reserved, 0K cma-reserved)
Nov 5 15:47:09.185669 kernel: devtmpfs: initialized
Nov 5 15:47:09.185676 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:47:09.185684 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:47:09.185691 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 15:47:09.185701 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:47:09.185708 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:47:09.185716 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:47:09.185724 kernel: audit: type=2000 audit(1762357626.400:1): state=initialized audit_enabled=0 res=1
Nov 5 15:47:09.185731 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:47:09.185739 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:47:09.185746 kernel: cpuidle: using governor menu
Nov 5 15:47:09.185756 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:47:09.185763 kernel: dca service started, version 1.12.1
Nov 5 15:47:09.185771 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 5 15:47:09.185778 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 5 15:47:09.187306 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:47:09.187320 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:47:09.187328 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:47:09.187340 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:47:09.187348 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:47:09.187355 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:47:09.187363 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:47:09.187370 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:47:09.187378 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:47:09.187386 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:47:09.187393 kernel: ACPI: Interpreter enabled
Nov 5 15:47:09.187403 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 15:47:09.187411 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:47:09.187418 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:47:09.187426 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:47:09.187434 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 15:47:09.187441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 15:47:09.187698 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:47:09.187894 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 15:47:09.188078 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 15:47:09.188088 kernel: PCI host bridge to bus 0000:00
Nov 5 15:47:09.188267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:47:09.188432 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 15:47:09.188621 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:47:09.188785 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 5 15:47:09.188948 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 15:47:09.189109 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 5 15:47:09.189270 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 15:47:09.189467 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:47:09.189681 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:47:09.189859 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 5 15:47:09.190034 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 5 15:47:09.190213 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 5 15:47:09.190386 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:47:09.190598 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:47:09.190777 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 5 15:47:09.190951 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 5 15:47:09.191125 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 5 15:47:09.191311 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:47:09.191502 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 5 15:47:09.191689 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 5 15:47:09.191865 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 5 15:47:09.192040 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 5 15:47:09.192410 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:47:09.192843 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 15:47:09.193044 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 15:47:09.193398 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 5 15:47:09.193608 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 5 15:47:09.193796 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 15:47:09.193971 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 5 15:47:09.193982 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 15:47:09.193994 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 15:47:09.194002 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 15:47:09.194009 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 15:47:09.194017 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 15:47:09.194025 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 15:47:09.194033 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 15:47:09.194040 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 15:47:09.194050 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 15:47:09.194058 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 15:47:09.194066 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 15:47:09.194073 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 15:47:09.194081 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 15:47:09.194089 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 15:47:09.194096 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 15:47:09.194107 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 15:47:09.194114 kernel: iommu: Default domain type: Translated
Nov 5 15:47:09.194122 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 15:47:09.194130 kernel: PCI: Using ACPI for IRQ routing
Nov 5 15:47:09.194137 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 15:47:09.194145 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 5 15:47:09.194153 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 5 15:47:09.194808 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 15:47:09.195057 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 15:47:09.195306 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 15:47:09.195342 kernel: vgaarb: loaded
Nov 5 15:47:09.195351 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 15:47:09.195359 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 15:47:09.195388 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 15:47:09.195400 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:47:09.195408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:47:09.195416 kernel: pnp: PnP ACPI init
Nov 5 15:47:09.195723 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 5 15:47:09.195737 kernel: pnp: PnP ACPI: found 5 devices
Nov 5 15:47:09.195745 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 15:47:09.195757 kernel: NET: Registered PF_INET protocol family
Nov 5 15:47:09.195788 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 15:47:09.195796 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 15:47:09.195823 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:47:09.195831 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:47:09.195838 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 15:47:09.195846 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 15:47:09.195857 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:47:09.195865 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:47:09.195872 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:47:09.195880 kernel: NET: Registered PF_XDP protocol family
Nov 5 15:47:09.196250 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 15:47:09.196420 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 15:47:09.196666 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 15:47:09.196878 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 5 15:47:09.197042 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 5 15:47:09.197203 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 5 15:47:09.197213 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:47:09.197221 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 5 15:47:09.197229 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 5 15:47:09.197237 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1cd42fed8cc, max_idle_ns: 440795202126 ns
Nov 5 15:47:09.197396 kernel: Initialise system trusted keyrings
Nov 5 15:47:09.197404 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 15:47:09.197411 kernel: Key type asymmetric registered
Nov 5 15:47:09.197419 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:47:09.197427 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 15:47:09.197434 kernel: io scheduler mq-deadline registered
Nov 5 15:47:09.197442 kernel: io scheduler kyber registered
Nov 5 15:47:09.197452 kernel: io scheduler bfq registered
Nov 5 15:47:09.197459 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 15:47:09.197467 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 15:47:09.197475 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 15:47:09.197483 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:47:09.197513 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 15:47:09.197521 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 15:47:09.197531 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 15:47:09.197539 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 15:47:09.197546 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 15:47:09.197732 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 5 15:47:09.197901 kernel: rtc_cmos 00:03: registered as rtc0
Nov 5 15:47:09.198069 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T15:47:07 UTC (1762357627)
Nov 5 15:47:09.198241 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 5 15:47:09.198251 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 15:47:09.198259 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:47:09.198267 kernel: Segment Routing with IPv6
Nov 5 15:47:09.198274 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:47:09.198282 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:47:09.198290 kernel: Key type dns_resolver registered
Nov 5 15:47:09.198300 kernel: IPI shorthand broadcast: enabled
Nov 5 15:47:09.198308 kernel: sched_clock: Marking stable (1222004525, 352077706)->(1706968113, -132885882)
Nov 5 15:47:09.198316 kernel: registered taskstats version 1
Nov 5 15:47:09.198323 kernel: Loading compiled-in X.509 certificates
Nov 5 15:47:09.198331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 15:47:09.198339 kernel: Demotion targets for Node 0: null
Nov 5 15:47:09.198346 kernel: Key type .fscrypt registered
Nov 5 15:47:09.198356 kernel: Key type fscrypt-provisioning registered
Nov 5 15:47:09.198363 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:47:09.198371 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:47:09.198379 kernel: ima: No architecture policies found
Nov 5 15:47:09.198386 kernel: clk: Disabling unused clocks
Nov 5 15:47:09.198394 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:47:09.198401 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:47:09.198411 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:47:09.198419 kernel: Run /init as init process
Nov 5 15:47:09.198427 kernel: with arguments:
Nov 5 15:47:09.198434 kernel: /init
Nov 5 15:47:09.198442 kernel: with environment:
Nov 5 15:47:09.198449 kernel: HOME=/
Nov 5 15:47:09.198472 kernel: TERM=linux
Nov 5 15:47:09.198482 kernel: SCSI subsystem initialized
Nov 5 15:47:09.198506 kernel: libata version 3.00 loaded.
Nov 5 15:47:09.198716 kernel: ahci 0000:00:1f.2: version 3.0 Nov 5 15:47:09.198729 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 5 15:47:09.198910 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 5 15:47:09.199132 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 5 15:47:09.199317 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 5 15:47:09.200155 kernel: scsi host0: ahci Nov 5 15:47:09.200360 kernel: scsi host1: ahci Nov 5 15:47:09.200577 kernel: scsi host2: ahci Nov 5 15:47:09.200774 kernel: scsi host3: ahci Nov 5 15:47:09.200966 kernel: scsi host4: ahci Nov 5 15:47:09.201161 kernel: scsi host5: ahci Nov 5 15:47:09.201173 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1 Nov 5 15:47:09.201181 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1 Nov 5 15:47:09.201189 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1 Nov 5 15:47:09.201197 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1 Nov 5 15:47:09.201208 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1 Nov 5 15:47:09.201219 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1 Nov 5 15:47:09.201227 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 5 15:47:09.201235 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 5 15:47:09.201242 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 5 15:47:09.201250 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 5 15:47:09.201258 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 5 15:47:09.201266 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 5 15:47:09.201457 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Nov 5 15:47:09.201673 kernel: scsi host6: Virtio SCSI HBA Nov 5 15:47:09.201883 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU 
HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 5 15:47:09.202086 kernel: sd 6:0:0:0: Power-on or device reset occurred Nov 5 15:47:09.202282 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 5 15:47:09.202476 kernel: sd 6:0:0:0: [sda] Write Protect is off Nov 5 15:47:09.202704 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 5 15:47:09.202914 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 5 15:47:09.202926 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 15:47:09.202934 kernel: GPT:25804799 != 167739391 Nov 5 15:47:09.202942 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 15:47:09.202950 kernel: GPT:25804799 != 167739391 Nov 5 15:47:09.202958 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 15:47:09.202969 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 5 15:47:09.203190 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Nov 5 15:47:09.203203 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:47:09.203212 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:47:09.203220 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:47:09.203228 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:47:09.203239 kernel: raid6: avx2x4 gen() 34185 MB/s Nov 5 15:47:09.203249 kernel: raid6: avx2x2 gen() 34335 MB/s Nov 5 15:47:09.203257 kernel: raid6: avx2x1 gen() 23655 MB/s Nov 5 15:47:09.203265 kernel: raid6: using algorithm avx2x2 gen() 34335 MB/s Nov 5 15:47:09.203273 kernel: raid6: .... 
xor() 29879 MB/s, rmw enabled Nov 5 15:47:09.203283 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:47:09.203291 kernel: xor: automatically using best checksumming function avx Nov 5 15:47:09.203299 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:47:09.203307 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (167) Nov 5 15:47:09.203315 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:47:09.203323 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:47:09.203332 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 5 15:47:09.203342 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:47:09.203350 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:47:09.203358 kernel: loop: module loaded Nov 5 15:47:09.203366 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:47:09.203374 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:47:09.203382 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:47:09.203395 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:47:09.203404 systemd[1]: Detected virtualization kvm. Nov 5 15:47:09.203412 systemd[1]: Detected architecture x86-64. Nov 5 15:47:09.203420 systemd[1]: Running in initrd. Nov 5 15:47:09.203428 systemd[1]: No hostname configured, using default hostname. Nov 5 15:47:09.203436 systemd[1]: Hostname set to . Nov 5 15:47:09.203447 systemd[1]: Initializing machine ID from random generator. 
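The GPT warnings earlier in the log ("GPT:25804799 != 167739391 ... Use GNU Parted to correct GPT errors") mean the virtual disk was grown after it was partitioned, so the backup GPT header still sits where the old end of the disk used to be. This is harmless but fixable; a minimal sketch of the fix, assuming the disk is /dev/sda as in this log and that sgdisk (from the gdisk package) is installed:

```shell
#!/bin/sh
# Relocate the backup GPT header of a grown disk to the true end of the disk.
# ASSUMPTION: the target disk is /dev/sda, matching the log above.
set -eu
DISK=/dev/sda

# Requires root and a real block device; exit quietly otherwise.
[ "$(id -u)" -eq 0 ] || { echo "must run as root" >&2; exit 0; }

# -e moves the backup GPT header and partition table to the last sectors.
sgdisk -e "$DISK"

# Ask the kernel to re-read the corrected partition table.
partprobe "$DISK"
```

Interactive `parted "$DISK" print` would detect the same mismatch and offer to fix it with a Fix/Ignore prompt, which is what the kernel message is alluding to.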
Nov 5 15:47:09.203455 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:47:09.203463 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:47:09.203471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:47:09.203480 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:47:09.203509 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:47:09.203518 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:47:09.203530 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:47:09.203539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:47:09.203547 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:47:09.203555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:47:09.203564 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:47:09.203574 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:47:09.203583 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:47:09.203591 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:47:09.203599 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:47:09.203607 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:47:09.203615 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:47:09.203624 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:47:09.203634 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 5 15:47:09.203642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:47:09.203650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:47:09.203658 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:47:09.203667 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:47:09.203675 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:47:09.203683 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:47:09.203694 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:47:09.203702 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:47:09.203711 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:47:09.203719 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:47:09.203727 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:47:09.203736 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:47:09.203744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:47:09.203755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:47:09.203763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:47:09.203771 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:47:09.203804 systemd-journald[303]: Collecting audit messages is disabled. Nov 5 15:47:09.203824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:47:09.203833 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 5 15:47:09.203844 kernel: Bridge firewalling registered Nov 5 15:47:09.203852 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:47:09.203861 systemd-journald[303]: Journal started Nov 5 15:47:09.203877 systemd-journald[303]: Runtime Journal (/run/log/journal/9cb97fe0351d4f34bd95456791b66dca) is 8M, max 78.2M, 70.2M free. Nov 5 15:47:09.199547 systemd-modules-load[304]: Inserted module 'br_netfilter' Nov 5 15:47:09.212897 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:47:09.215319 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:47:09.220646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:47:09.225027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:47:09.230887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:47:09.247794 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:47:09.251914 systemd-tmpfiles[321]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:47:09.331953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:47:09.333026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:47:09.334680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:47:09.339476 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:47:09.342638 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:47:09.364916 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
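The bridge warning above ("filtering via arp/ip/ip6tables is no longer available by default") is followed by systemd-modules-load inserting br_netfilter, so this initrd already handles it. For hosts that hit the warning without that, a small sketch of loading the module now and persisting it across reboots:

```shell
#!/bin/sh
# Load br_netfilter immediately and make it load on every boot.
set -eu

# Requires root; exit quietly otherwise.
[ "$(id -u)" -eq 0 ] || { echo "must run as root" >&2; exit 0; }

modprobe br_netfilter

# systemd reads /etc/modules-load.d/*.conf at boot and loads listed modules.
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Optional: actually filter bridged IPv4 traffic through iptables.
sysctl -w net.bridge.bridge-nf-call-iptables=1
```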
Nov 5 15:47:09.368651 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:47:09.396546 dracut-cmdline[344]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:47:09.402390 systemd-resolved[331]: Positive Trust Anchors: Nov 5 15:47:09.402403 systemd-resolved[331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:47:09.402408 systemd-resolved[331]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:47:09.402435 systemd-resolved[331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:47:09.435194 systemd-resolved[331]: Defaulting to hostname 'linux'. Nov 5 15:47:09.437191 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:47:09.439427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:47:09.499521 kernel: Loading iSCSI transport class v2.0-870. 
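The kernel command line echoed by dracut-cmdline above (`mount.usr=/dev/mapper/usr verity.usr=PARTUUID=... verity.usrhash=...`) tells the initrd to open the /usr partition through dm-verity, so every read is integrity-checked against the root hash. A rough, hedged sketch of the equivalent manual operation — the device paths come from this log's kargs, but the exact hash-tree layout (and any required `--hash-offset`) is Flatcar-internal, so treat this as illustrative only:

```shell
#!/bin/sh
# Illustrative dm-verity open for the /usr partition, mirroring the kargs above.
# ASSUMPTION: Flatcar appends the verity hash tree to the same partition;
# the real initrd computes the offsets itself, so this is a sketch, not the
# exact invocation.
set -eu
[ "$(id -u)" -eq 0 ] || { echo "must run as root" >&2; exit 0; }

DATA=/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132
ROOTHASH=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4

# Creates /dev/mapper/usr; reads through it fail if any block's hash mismatches.
veritysetup open "$DATA" usr "$DATA" "$ROOTHASH"

# Matches mount.usrflags=ro from the command line: /usr is mounted read-only.
mount -o ro /dev/mapper/usr /usr
```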
Nov 5 15:47:09.515519 kernel: iscsi: registered transport (tcp) Nov 5 15:47:09.538787 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:47:09.538817 kernel: QLogic iSCSI HBA Driver Nov 5 15:47:09.567478 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:47:09.583249 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:47:09.586590 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:47:09.630011 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:47:09.632253 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:47:09.635512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:47:09.668314 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:47:09.671646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:47:09.698478 systemd-udevd[584]: Using default interface naming scheme 'v257'. Nov 5 15:47:09.711992 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:47:09.716613 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:47:09.740337 dracut-pre-trigger[648]: rd.md=0: removing MD RAID activation Nov 5 15:47:09.757632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:47:09.766634 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:47:09.776613 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:47:09.782618 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 15:47:09.814227 systemd-networkd[707]: lo: Link UP Nov 5 15:47:09.814237 systemd-networkd[707]: lo: Gained carrier Nov 5 15:47:09.814771 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:47:09.816218 systemd[1]: Reached target network.target - Network. Nov 5 15:47:09.882314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:47:09.885630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:47:09.988658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 5 15:47:10.009977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 5 15:47:10.022034 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 5 15:47:10.027293 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:47:10.034742 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:47:10.052205 disk-uuid[756]: Primary Header is updated. Nov 5 15:47:10.052205 disk-uuid[756]: Secondary Entries is updated. Nov 5 15:47:10.052205 disk-uuid[756]: Secondary Header is updated. Nov 5 15:47:10.061455 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 5 15:47:10.222883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:47:10.223005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:47:10.234507 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 15:47:10.227187 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:47:10.235645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 15:47:10.247512 kernel: AES CTR mode by8 optimization enabled Nov 5 15:47:10.325717 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:47:10.325733 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:47:10.327685 systemd-networkd[707]: eth0: Link UP Nov 5 15:47:10.327951 systemd-networkd[707]: eth0: Gained carrier Nov 5 15:47:10.327962 systemd-networkd[707]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:47:10.454445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:47:10.464586 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:47:10.467092 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:47:10.468289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:47:10.470008 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:47:10.474621 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:47:10.497069 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:47:11.120562 systemd-networkd[707]: eth0: DHCPv4 address 172.239.60.160/24, gateway 172.239.60.1 acquired from 23.40.197.110 Nov 5 15:47:11.254587 disk-uuid[757]: Warning: The kernel is still using the old partition table. Nov 5 15:47:11.254587 disk-uuid[757]: The new table will be used at the next reboot or after you Nov 5 15:47:11.254587 disk-uuid[757]: run partprobe(8) or kpartx(8) Nov 5 15:47:11.254587 disk-uuid[757]: The operation has completed successfully. Nov 5 15:47:11.260702 systemd[1]: disk-uuid.service: Deactivated successfully. 
Nov 5 15:47:11.260839 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:47:11.262797 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 15:47:11.307511 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853) Nov 5 15:47:11.312784 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:47:11.312827 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:47:11.319828 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:47:11.319850 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:47:11.323742 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:47:11.332508 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:47:11.332906 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:47:11.335192 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 15:47:11.474776 ignition[872]: Ignition 2.22.0 Nov 5 15:47:11.474793 ignition[872]: Stage: fetch-offline Nov 5 15:47:11.474835 ignition[872]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:47:11.474847 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:47:11.478379 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:47:11.474922 ignition[872]: parsed url from cmdline: "" Nov 5 15:47:11.474927 ignition[872]: no config URL provided Nov 5 15:47:11.474932 ignition[872]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:47:11.481457 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 5 15:47:11.474942 ignition[872]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:47:11.474948 ignition[872]: failed to fetch config: resource requires networking Nov 5 15:47:11.475085 ignition[872]: Ignition finished successfully Nov 5 15:47:11.512357 ignition[879]: Ignition 2.22.0 Nov 5 15:47:11.512373 ignition[879]: Stage: fetch Nov 5 15:47:11.512477 ignition[879]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:47:11.512508 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:47:11.512576 ignition[879]: parsed url from cmdline: "" Nov 5 15:47:11.512581 ignition[879]: no config URL provided Nov 5 15:47:11.512586 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:47:11.512594 ignition[879]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:47:11.512615 ignition[879]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 5 15:47:11.595071 ignition[879]: PUT result: OK Nov 5 15:47:11.595344 ignition[879]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 5 15:47:11.703419 ignition[879]: GET result: OK Nov 5 15:47:11.704562 ignition[879]: parsing config with SHA512: b64df61205feff9275c58466b60172d2bc2a9dfe6c3ed5d28470174905ff109863e91ee62b24d574e1b79bf2912f75455b730bc96db7f5f2840e8ce83019e604 Nov 5 15:47:11.712123 unknown[879]: fetched base config from "system" Nov 5 15:47:11.712136 unknown[879]: fetched base config from "system" Nov 5 15:47:11.712405 ignition[879]: fetch: fetch complete Nov 5 15:47:11.712143 unknown[879]: fetched user config from "akamai" Nov 5 15:47:11.712412 ignition[879]: fetch: fetch passed Nov 5 15:47:11.712452 ignition[879]: Ignition finished successfully Nov 5 15:47:11.716897 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:47:11.720383 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
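The fetch stage above shows the token-then-GET flow of the Akamai/Linode metadata service: a PUT to `http://169.254.169.254/v1/token`, then a GET of `/v1/user-data`. A hedged curl sketch of the same exchange — the log records only the URLs, so the header names here follow the Linode Metadata Service conventions and should be verified against its documentation:

```shell
#!/bin/sh
# Re-create Ignition's config fetch by hand, from inside the instance.
# ASSUMPTION: header names (Metadata-Token*) are from the Linode Metadata
# Service; only the two URLs appear in the log above.
set -eu

# Step 1: PUT /v1/token returns a short-lived session token.
TOKEN=$(curl -fsS -X PUT \
  -H "Metadata-Token-Expiry-Seconds: 300" \
  http://169.254.169.254/v1/token)

# Step 2: GET /v1/user-data with the token; on this platform user-data is
# typically base64-encoded, so decode it (drop base64 -d if yours is plain).
curl -fsS -H "Metadata-Token: $TOKEN" \
  http://169.254.169.254/v1/user-data | base64 -d
```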
Nov 5 15:47:11.754692 ignition[886]: Ignition 2.22.0 Nov 5 15:47:11.755522 ignition[886]: Stage: kargs Nov 5 15:47:11.755696 ignition[886]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:47:11.755708 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:47:11.757675 ignition[886]: kargs: kargs passed Nov 5 15:47:11.757745 ignition[886]: Ignition finished successfully Nov 5 15:47:11.760418 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:47:11.763789 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:47:11.806999 ignition[892]: Ignition 2.22.0 Nov 5 15:47:11.807012 ignition[892]: Stage: disks Nov 5 15:47:11.807350 ignition[892]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:47:11.807368 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:47:11.808451 ignition[892]: disks: disks passed Nov 5 15:47:11.810311 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:47:11.808518 ignition[892]: Ignition finished successfully Nov 5 15:47:11.811998 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:47:11.813461 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:47:11.837664 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:47:11.839250 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:47:11.841183 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:47:11.845630 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:47:11.884799 systemd-fsck[901]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 15:47:11.887358 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:47:11.890625 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 5 15:47:12.008532 kernel: EXT4-fs (sda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:47:12.009688 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:47:12.011073 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:47:12.014024 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:47:12.017574 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:47:12.019462 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 15:47:12.020750 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:47:12.020777 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:47:12.029803 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:47:12.032996 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:47:12.041745 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (909) Nov 5 15:47:12.041774 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:47:12.046129 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:47:12.057519 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:47:12.057542 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:47:12.057555 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:47:12.061858 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:47:12.106405 initrd-setup-root[933]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:47:12.113541 initrd-setup-root[940]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:47:12.119337 initrd-setup-root[947]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:47:12.124937 initrd-setup-root[954]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:47:12.235521 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:47:12.238589 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:47:12.240720 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:47:12.260519 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:47:12.269467 systemd-networkd[707]: eth0: Gained IPv6LL Nov 5 15:47:12.278327 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:47:12.288364 ignition[1024]: INFO : Ignition 2.22.0 Nov 5 15:47:12.288364 ignition[1024]: INFO : Stage: mount Nov 5 15:47:12.291410 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:47:12.291410 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:47:12.291410 ignition[1024]: INFO : mount: mount passed Nov 5 15:47:12.291410 ignition[1024]: INFO : Ignition finished successfully Nov 5 15:47:12.290858 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:47:12.294580 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:47:12.299620 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:47:12.310548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 5 15:47:12.332527 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1034) Nov 5 15:47:12.339564 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:47:12.339605 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:47:12.346860 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:47:12.346900 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:47:12.346916 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:47:12.351314 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:47:12.385734 ignition[1050]: INFO : Ignition 2.22.0 Nov 5 15:47:12.385734 ignition[1050]: INFO : Stage: files Nov 5 15:47:12.387756 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:47:12.387756 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:47:12.387756 ignition[1050]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:47:12.391075 ignition[1050]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:47:12.391075 ignition[1050]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:47:12.393359 ignition[1050]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:47:12.393359 ignition[1050]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:47:12.395572 ignition[1050]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:47:12.393626 unknown[1050]: wrote ssh authorized keys file for user: core Nov 5 15:47:12.397387 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 15:47:12.397387 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 5 15:47:12.768466 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:47:13.058655 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 5 15:47:13.058655 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:47:13.061478 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:47:13.071273 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:47:13.071273 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:47:13.071273 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:47:13.071273 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 5 15:47:13.475523 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 15:47:14.248934 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 5 15:47:14.248934 ignition[1050]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 15:47:14.251473 ignition[1050]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:47:14.252791 ignition[1050]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:47:14.252791 ignition[1050]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 15:47:14.252791 ignition[1050]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 5 15:47:14.252791 ignition[1050]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 5 15:47:14.252791 ignition[1050]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 5 15:47:14.282260 ignition[1050]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 5 15:47:14.282260 ignition[1050]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:47:14.282260 ignition[1050]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:47:14.282260 ignition[1050]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:47:14.282260 ignition[1050]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:47:14.282260 ignition[1050]: INFO : files: files passed Nov 5 15:47:14.282260 ignition[1050]: INFO : Ignition finished successfully Nov 5 15:47:14.256477 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:47:14.280621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:47:14.290647 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:47:14.294771 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:47:14.294908 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 15:47:14.317557 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:47:14.318899 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:47:14.320477 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:47:14.321602 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:47:14.323142 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
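The files stage above (the helm tarball, the home-directory manifests, `/etc/flatcar/update.conf`, the ssh keys for `core`, and the enabled `prepare-helm.service` preset) is the kind of output produced by a Butane/Ignition config along these lines. A minimal hedged sketch — the paths and URLs mirror the log, but the ssh key, unit body, and variant version are placeholders:

```yaml
# Illustrative Butane config that would produce a files stage like the one
# logged above. The ssh key and service contents are placeholders, not the
# actual config this instance booted with.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example
storage:
  files:
    - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm into /opt/bin
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/tar xzf /opt/helm-v3.17.0-linux-amd64.tar.gz \
          -C /opt/bin --strip-components=1 linux-amd64/helm
        [Install]
        WantedBy=multi-user.target
```

Butane renders this to the Ignition JSON that the `ignition[1050]` process above consumed during first boot.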
Nov 5 15:47:14.325440 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:47:14.374869 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:47:14.375003 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:47:14.376946 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:47:14.378291 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:47:14.380171 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:47:14.381057 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:47:14.424937 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:47:14.427176 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:47:14.447755 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:47:14.448850 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:47:14.450626 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:47:14.451603 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:47:14.453180 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:47:14.453331 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:47:14.455436 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:47:14.456541 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:47:14.458067 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:47:14.459611 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:47:14.461076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:47:14.463313 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:47:14.464935 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:47:14.466621 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:47:14.468373 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:47:14.470414 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:47:14.471992 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:47:14.473732 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:47:14.473873 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:47:14.475598 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:47:14.476657 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:47:14.478099 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:47:14.480601 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:47:14.481585 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:47:14.481721 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:47:14.485118 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:47:14.485230 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:47:14.486261 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:47:14.486397 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:47:14.489572 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:47:14.490934 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:47:14.491601 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:47:14.496010 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:47:14.497640 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:47:14.497847 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:47:14.498748 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:47:14.498889 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:47:14.503003 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:47:14.503120 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:47:14.514431 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:47:14.515073 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:47:14.527509 ignition[1107]: INFO : Ignition 2.22.0
Nov 5 15:47:14.527509 ignition[1107]: INFO : Stage: umount
Nov 5 15:47:14.527509 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:47:14.527509 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 5 15:47:14.527509 ignition[1107]: INFO : umount: umount passed
Nov 5 15:47:14.527509 ignition[1107]: INFO : Ignition finished successfully
Nov 5 15:47:14.528987 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:47:14.529344 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:47:14.531924 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:47:14.531978 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:47:14.532715 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:47:14.532778 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:47:14.534309 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 5 15:47:14.534369 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 5 15:47:14.536280 systemd[1]: Stopped target network.target - Network.
Nov 5 15:47:14.536994 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:47:14.537054 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:47:14.538391 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:47:14.539125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:47:14.563147 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:47:14.564030 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:47:14.565367 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:47:14.566778 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:47:14.566825 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:47:14.568275 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:47:14.568318 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:47:14.569740 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:47:14.569803 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:47:14.571210 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:47:14.571261 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:47:14.572751 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:47:14.574435 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:47:14.579404 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:47:14.580307 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:47:14.581402 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:47:14.582464 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:47:14.582627 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:47:14.588639 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:47:14.588786 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:47:14.593213 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:47:14.594292 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:47:14.594340 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:47:14.595881 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:47:14.595938 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:47:14.598273 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:47:14.600647 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:47:14.600732 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:47:14.602223 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:47:14.602295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:47:14.605828 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:47:14.605892 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:47:14.607278 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:47:14.624041 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:47:14.624226 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:47:14.629479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:47:14.629644 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:47:14.632934 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:47:14.632980 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:47:14.634577 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:47:14.634636 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:47:14.635935 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:47:14.635987 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:47:14.637348 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:47:14.637399 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:47:14.640641 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:47:14.641764 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:47:14.641818 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:47:14.643710 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:47:14.643760 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:47:14.647574 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:47:14.647624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:47:14.649111 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:47:14.655582 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:47:14.662227 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:47:14.662376 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:47:14.664235 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:47:14.666382 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:47:14.700458 systemd[1]: Switching root.
Nov 5 15:47:14.734272 systemd-journald[303]: Journal stopped
Nov 5 15:47:15.942847 systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:47:15.942881 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:47:15.942894 kernel: SELinux: policy capability open_perms=1
Nov 5 15:47:15.942904 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:47:15.942916 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:47:15.942925 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:47:15.942936 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:47:15.942945 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:47:15.942955 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:47:15.942964 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:47:15.942976 kernel: audit: type=1403 audit(1762357634.871:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:47:15.942986 systemd[1]: Successfully loaded SELinux policy in 74.402ms.
Nov 5 15:47:15.942997 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.135ms.
Nov 5 15:47:15.943010 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:47:15.943024 systemd[1]: Detected virtualization kvm.
Nov 5 15:47:15.943034 systemd[1]: Detected architecture x86-64.
Nov 5 15:47:15.943045 systemd[1]: Detected first boot.
Nov 5 15:47:15.943056 systemd[1]: Initializing machine ID from random generator.
Nov 5 15:47:15.943066 zram_generator::config[1151]: No configuration found.
Nov 5 15:47:15.943079 kernel: Guest personality initialized and is inactive
Nov 5 15:47:15.943089 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:47:15.943099 kernel: Initialized host personality
Nov 5 15:47:15.943109 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:47:15.943120 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:47:15.943130 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:47:15.943143 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:47:15.943153 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:47:15.943165 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:47:15.943176 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:47:15.943186 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:47:15.943197 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:47:15.943210 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:47:15.943220 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:47:15.943231 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:47:15.943242 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:47:15.943252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:47:15.943263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:47:15.943274 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:47:15.943286 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:47:15.943297 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:47:15.943310 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:47:15.943321 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:47:15.943332 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:47:15.943344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:47:15.943357 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:47:15.943368 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:47:15.943378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:47:15.943389 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:47:15.943400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:47:15.943410 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:47:15.943423 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:47:15.943434 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:47:15.943444 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:47:15.943455 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:47:15.943466 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:47:15.943477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:47:15.943503 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:47:15.943514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:47:15.943525 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:47:15.943536 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:47:15.943547 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:47:15.943560 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:47:15.943571 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:47:15.943582 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:47:15.943593 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:47:15.943604 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:47:15.943615 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:47:15.943628 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:47:15.943639 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:47:15.943650 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:47:15.943661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:47:15.943672 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:47:15.943683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:47:15.943693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:47:15.943706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:47:15.943717 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:47:15.943728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:47:15.943739 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:47:15.943750 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:47:15.943761 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:47:15.943772 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:47:15.943784 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:47:15.943796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:47:15.943806 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:47:15.943817 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:47:15.943830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:47:15.943841 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:47:15.943854 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:47:15.943864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:47:15.943895 systemd-journald[1239]: Collecting audit messages is disabled.
Nov 5 15:47:15.943915 kernel: fuse: init (API version 7.41)
Nov 5 15:47:15.943929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:47:15.943940 systemd-journald[1239]: Journal started
Nov 5 15:47:15.943960 systemd-journald[1239]: Runtime Journal (/run/log/journal/63a70b3a02ff48eea93eb67da0705887) is 8M, max 78.2M, 70.2M free.
Nov 5 15:47:15.950204 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:47:15.550108 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:47:15.576927 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 5 15:47:15.577532 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:47:15.955554 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:47:15.962564 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:47:15.964147 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:47:15.965571 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:47:16.000078 kernel: ACPI: bus type drm_connector registered
Nov 5 15:47:15.993742 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:47:15.994618 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:47:15.997545 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:47:15.998649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:47:16.001001 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:47:16.002603 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:47:16.004780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:47:16.005022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:47:16.006936 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:47:16.007822 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:47:16.009479 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:47:16.009868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:47:16.011319 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:47:16.011775 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:47:16.012884 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:47:16.013372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:47:16.014644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:47:16.015993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:47:16.018226 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:47:16.019958 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:47:16.036906 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:47:16.038395 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:47:16.040590 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:47:16.047622 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:47:16.048426 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:47:16.048526 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:47:16.052372 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:47:16.056242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:47:16.062678 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:47:16.066624 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:47:16.067444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:47:16.070949 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:47:16.072741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:47:16.078324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:47:16.081444 systemd-journald[1239]: Time spent on flushing to /var/log/journal/63a70b3a02ff48eea93eb67da0705887 is 68.633ms for 979 entries.
Nov 5 15:47:16.081444 systemd-journald[1239]: System Journal (/var/log/journal/63a70b3a02ff48eea93eb67da0705887) is 8M, max 588.1M, 580.1M free.
Nov 5 15:47:16.171277 systemd-journald[1239]: Received client request to flush runtime journal.
Nov 5 15:47:16.171337 kernel: loop1: detected capacity change from 0 to 8
Nov 5 15:47:16.171364 kernel: loop2: detected capacity change from 0 to 224512
Nov 5 15:47:16.083891 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:47:16.089049 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:47:16.093582 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:47:16.095687 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:47:16.097426 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:47:16.101274 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:47:16.106660 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:47:16.122665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:47:16.153968 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:47:16.157361 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:47:16.164892 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:47:16.167690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:47:16.169948 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:47:16.182811 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:47:16.194619 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:47:16.217515 kernel: loop3: detected capacity change from 0 to 110984
Nov 5 15:47:16.220461 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Nov 5 15:47:16.221456 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Nov 5 15:47:16.234388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:47:16.259508 kernel: loop4: detected capacity change from 0 to 128048
Nov 5 15:47:16.261692 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:47:16.297514 kernel: loop5: detected capacity change from 0 to 8
Nov 5 15:47:16.307517 kernel: loop6: detected capacity change from 0 to 224512
Nov 5 15:47:16.333085 kernel: loop7: detected capacity change from 0 to 110984
Nov 5 15:47:16.355514 kernel: loop1: detected capacity change from 0 to 128048
Nov 5 15:47:16.357116 systemd-resolved[1286]: Positive Trust Anchors:
Nov 5 15:47:16.357580 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:47:16.357641 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:47:16.357825 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:47:16.367936 systemd-resolved[1286]: Defaulting to hostname 'linux'.
Nov 5 15:47:16.368205 (sd-merge)[1307]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Nov 5 15:47:16.370212 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:47:16.372684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:47:16.375317 (sd-merge)[1307]: Merged extensions into '/usr'.
Nov 5 15:47:16.379696 systemd[1]: Reload requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 15:47:16.379711 systemd[1]: Reloading...
Nov 5 15:47:16.455541 zram_generator::config[1338]: No configuration found.
Nov 5 15:47:16.655854 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 15:47:16.656576 systemd[1]: Reloading finished in 276 ms.
Nov 5 15:47:16.686599 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 15:47:16.687921 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 15:47:16.699006 systemd[1]: Starting ensure-sysext.service...
Nov 5 15:47:16.702618 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:47:16.709933 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:47:16.732775 systemd[1]: Reload requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Nov 5 15:47:16.732795 systemd[1]: Reloading...
Nov 5 15:47:16.732898 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 15:47:16.732938 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 15:47:16.733526 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 15:47:16.733786 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 15:47:16.735528 systemd-tmpfiles[1379]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 15:47:16.737624 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 5 15:47:16.737699 systemd-tmpfiles[1379]: ACLs are not supported, ignoring.
Nov 5 15:47:16.749220 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:47:16.749237 systemd-tmpfiles[1379]: Skipping /boot
Nov 5 15:47:16.770682 systemd-tmpfiles[1379]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:47:16.770700 systemd-tmpfiles[1379]: Skipping /boot
Nov 5 15:47:16.771836 systemd-udevd[1380]: Using default interface naming scheme 'v257'.
Nov 5 15:47:16.845579 zram_generator::config[1418]: No configuration found.
Nov 5 15:47:16.999528 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:47:17.055340 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 5 15:47:17.055822 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 15:47:17.062516 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 15:47:17.113512 kernel: EDAC MC: Ver: 3.0.0 Nov 5 15:47:17.122524 kernel: ACPI: button: Power Button [PWRF] Nov 5 15:47:17.187817 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:47:17.187957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 5 15:47:17.188810 systemd[1]: Reloading finished in 455 ms. Nov 5 15:47:17.198697 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:47:17.207114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:47:17.305074 systemd[1]: Finished ensure-sysext.service. Nov 5 15:47:17.311789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:47:17.313605 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:47:17.319137 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:47:17.320678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:47:17.324067 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:47:17.327054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:47:17.330060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:47:17.333875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Nov 5 15:47:17.338675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:47:17.340090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:47:17.348675 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:47:17.350050 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:47:17.352905 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:47:17.362939 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:47:17.367524 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 15:47:17.372720 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 15:47:17.378446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:47:17.380542 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:47:17.381900 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:47:17.382117 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:47:17.384147 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:47:17.384926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:47:17.390260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:47:17.391757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 5 15:47:17.396138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:47:17.440264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:47:17.442025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:47:17.443723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:47:17.447762 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:47:17.455116 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:47:17.499826 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:47:17.523853 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:47:17.524888 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:47:17.535772 augenrules[1547]: No rules Nov 5 15:47:17.538341 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:47:17.539279 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:47:17.545204 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 15:47:17.546836 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:47:17.585563 systemd-networkd[1515]: lo: Link UP Nov 5 15:47:17.585573 systemd-networkd[1515]: lo: Gained carrier Nov 5 15:47:17.588184 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:47:17.589118 systemd[1]: Reached target network.target - Network. 
Nov 5 15:47:17.589529 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:47:17.589538 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:47:17.593407 systemd-networkd[1515]: eth0: Link UP Nov 5 15:47:17.595445 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:47:17.597690 systemd-networkd[1515]: eth0: Gained carrier Nov 5 15:47:17.597726 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:47:17.598040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:47:17.716834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:47:17.745963 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:47:17.903769 ldconfig[1502]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:47:17.907093 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:47:17.909545 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:47:17.930906 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:47:17.931988 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:47:17.932972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:47:17.933819 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:47:17.934730 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Nov 5 15:47:17.935650 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:47:17.936673 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:47:17.937512 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:47:17.938255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:47:17.938289 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:47:17.938971 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:47:17.940844 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:47:17.943387 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:47:17.946155 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:47:17.947056 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:47:17.947831 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:47:17.951404 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:47:17.952555 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:47:17.954076 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:47:17.955575 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:47:17.956256 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:47:17.957002 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:47:17.957038 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:47:17.958078 systemd[1]: Starting containerd.service - containerd container runtime... 
Nov 5 15:47:17.959995 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:47:17.964132 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:47:17.968099 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:47:17.970388 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:47:17.974341 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:47:17.975580 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:47:17.979685 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:47:17.982753 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:47:17.987713 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:47:17.996907 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:47:18.005840 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:47:18.013290 jq[1571]: false Nov 5 15:47:18.015882 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:47:18.017582 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:47:18.018005 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:47:18.018653 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:47:18.022698 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Nov 5 15:47:18.043200 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing passwd entry cache Nov 5 15:47:18.043210 oslogin_cache_refresh[1573]: Refreshing passwd entry cache Nov 5 15:47:18.045072 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting users, quitting Nov 5 15:47:18.045072 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:47:18.045063 oslogin_cache_refresh[1573]: Failure getting users, quitting Nov 5 15:47:18.045353 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Refreshing group entry cache Nov 5 15:47:18.045078 oslogin_cache_refresh[1573]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:47:18.045116 oslogin_cache_refresh[1573]: Refreshing group entry cache Nov 5 15:47:18.050863 oslogin_cache_refresh[1573]: Failure getting groups, quitting Nov 5 15:47:18.049632 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:47:18.053224 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Failure getting groups, quitting Nov 5 15:47:18.053224 google_oslogin_nss_cache[1573]: oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:47:18.050877 oslogin_cache_refresh[1573]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:47:18.051881 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:47:18.055270 jq[1582]: true Nov 5 15:47:18.052641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:47:18.063112 coreos-metadata[1568]: Nov 05 15:47:18.061 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 5 15:47:18.060414 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Nov 5 15:47:18.061681 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:47:18.065015 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:47:18.065244 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 15:47:18.081939 extend-filesystems[1572]: Found /dev/sda6 Nov 5 15:47:18.085657 jq[1591]: true Nov 5 15:47:18.093898 extend-filesystems[1572]: Found /dev/sda9 Nov 5 15:47:18.096520 extend-filesystems[1572]: Checking size of /dev/sda9 Nov 5 15:47:18.105840 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:47:18.120932 extend-filesystems[1572]: Resized partition /dev/sda9 Nov 5 15:47:18.126195 update_engine[1581]: I20251105 15:47:18.125819 1581 main.cc:92] Flatcar Update Engine starting Nov 5 15:47:18.126402 extend-filesystems[1622]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:47:18.136673 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Nov 5 15:47:18.139590 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:47:18.139863 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:47:18.150518 tar[1589]: linux-amd64/LICENSE Nov 5 15:47:18.150518 tar[1589]: linux-amd64/helm Nov 5 15:47:18.154577 dbus-daemon[1569]: [system] SELinux support is enabled Nov 5 15:47:18.154833 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:47:18.160436 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:47:18.160468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 5 15:47:18.162029 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:47:18.162051 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:47:18.172814 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:47:18.177251 update_engine[1581]: I20251105 15:47:18.173843 1581 update_check_scheduler.cc:74] Next update check in 7m1s Nov 5 15:47:18.180700 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:47:18.324102 systemd-networkd[1515]: eth0: DHCPv4 address 172.239.60.160/24, gateway 172.239.60.1 acquired from 23.40.197.110 Nov 5 15:47:18.324234 dbus-daemon[1569]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1515 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 5 15:47:18.325421 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Nov 5 15:47:18.343788 bash[1643]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:47:18.330673 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 5 15:47:18.345143 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:47:18.368216 systemd[1]: Starting sshkeys.service... Nov 5 15:47:18.443530 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 15:47:18.451302 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 5 15:47:18.453807 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Nov 5 15:47:18.454257 dbus-daemon[1569]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 5 15:47:18.456857 dbus-daemon[1569]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1645 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 5 15:47:18.486623 systemd[1]: Starting polkit.service - Authorization Manager... Nov 5 15:47:18.509539 systemd-logind[1580]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 15:47:18.509574 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:47:18.513754 systemd-logind[1580]: New seat seat0. Nov 5 15:47:18.527420 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:47:18.565454 locksmithd[1636]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:47:18.605158 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Nov 5 15:47:18.632033 coreos-metadata[1654]: Nov 05 15:47:18.620 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 5 15:47:18.634535 extend-filesystems[1622]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 5 15:47:18.634535 extend-filesystems[1622]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 5 15:47:18.634535 extend-filesystems[1622]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Nov 5 15:47:18.634254 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:47:18.647399 extend-filesystems[1572]: Resized filesystem in /dev/sda9 Nov 5 15:47:18.635534 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 5 15:47:18.660900 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:47:18.695242 containerd[1603]: time="2025-11-05T15:47:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:47:18.698687 containerd[1603]: time="2025-11-05T15:47:18.698662852Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726687214Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.4µs" Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726708594Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726724224Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726876194Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726895373Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726917203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726986823Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.726997053Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.727190033Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.727203773Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.727213203Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:47:18.727515 containerd[1603]: time="2025-11-05T15:47:18.727220423Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:47:18.727747 containerd[1603]: time="2025-11-05T15:47:18.727315383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:47:18.729665 containerd[1603]: time="2025-11-05T15:47:18.729580811Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:47:18.729665 containerd[1603]: time="2025-11-05T15:47:18.729627231Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:47:18.729665 containerd[1603]: time="2025-11-05T15:47:18.729637921Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:47:18.731634 containerd[1603]: time="2025-11-05T15:47:18.731051019Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups 
type=io.containerd.monitor.task.v1 Nov 5 15:47:18.731634 containerd[1603]: time="2025-11-05T15:47:18.731328109Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:47:18.731634 containerd[1603]: time="2025-11-05T15:47:18.731396429Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:47:18.736072 coreos-metadata[1654]: Nov 05 15:47:18.735 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737439073Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737481443Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737567283Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737592343Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737603103Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737611963Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737622853Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737633023Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737642203Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737651363Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737658863Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737669173Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737773193Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:47:18.738401 containerd[1603]: time="2025-11-05T15:47:18.737790513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737803533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737812923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737824343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737832773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737841543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737849993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737859433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737868773Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737884523Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737941062Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737952882Z" level=info msg="Start snapshots syncer" Nov 5 15:47:18.738952 containerd[1603]: time="2025-11-05T15:47:18.737974392Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:47:18.739151 containerd[1603]: time="2025-11-05T15:47:18.738149492Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:47:18.739151 containerd[1603]: time="2025-11-05T15:47:18.738192822Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743535637Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743683617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743712557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743724347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743733977Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743744577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743753997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743763497Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743782987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743792087Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743806297Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743854467Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743867717Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:47:18.744042 containerd[1603]: time="2025-11-05T15:47:18.743875237Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743883337Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743927766Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743943496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743953456Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743968696Z" level=info msg="runtime interface created"
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743974276Z" level=info msg="created NRI interface"
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743981236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.743991286Z" level=info msg="Connect containerd service"
Nov 5 15:47:18.744296 containerd[1603]: time="2025-11-05T15:47:18.744012206Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 15:47:18.747345 containerd[1603]: time="2025-11-05T15:47:18.747103493Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 5 15:47:18.765464 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 15:47:18.775463 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 15:47:18.774233 polkitd[1656]: Started polkitd version 126
Nov 5 15:47:18.783347 polkitd[1656]: Loading rules from directory /etc/polkit-1/rules.d
Nov 5 15:47:18.783960 polkitd[1656]: Loading rules from directory /run/polkit-1/rules.d
Nov 5 15:47:18.784053 polkitd[1656]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 5 15:47:18.784304 polkitd[1656]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Nov 5 15:47:18.784387 polkitd[1656]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 5 15:47:18.784466 polkitd[1656]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 5 15:47:18.785033 polkitd[1656]: Finished loading, compiling and executing 2 rules
Nov 5 15:47:18.785317 systemd[1]: Started polkit.service - Authorization Manager.
Nov 5 15:47:18.791059 dbus-daemon[1569]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 5 15:47:18.793134 polkitd[1656]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 5 15:47:18.793933 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 15:47:18.794193 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 15:47:18.801268 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 15:47:18.822661 systemd-hostnamed[1645]: Hostname set to <172-239-60-160> (transient)
Nov 5 15:47:18.823413 systemd-resolved[1286]: System hostname changed to '172-239-60-160'.
Nov 5 15:47:18.852767 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 15:47:18.856950 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 15:47:18.866419 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 15:47:18.871821 coreos-metadata[1654]: Nov 05 15:47:18.871 INFO Fetch successful
Nov 5 15:47:18.892011 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 15:47:18.920104 update-ssh-keys[1701]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 15:47:18.921906 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 5 15:47:18.926763 systemd[1]: Finished sshkeys.service.
Nov 5 15:47:18.931796 tar[1589]: linux-amd64/README.md
Nov 5 15:47:18.936388 containerd[1603]: time="2025-11-05T15:47:18.936351674Z" level=info msg="Start subscribing containerd event"
Nov 5 15:47:18.936448 containerd[1603]: time="2025-11-05T15:47:18.936398014Z" level=info msg="Start recovering state"
Nov 5 15:47:18.936566 containerd[1603]: time="2025-11-05T15:47:18.936544564Z" level=info msg="Start event monitor"
Nov 5 15:47:18.936607 containerd[1603]: time="2025-11-05T15:47:18.936565454Z" level=info msg="Start cni network conf syncer for default"
Nov 5 15:47:18.936607 containerd[1603]: time="2025-11-05T15:47:18.936591994Z" level=info msg="Start streaming server"
Nov 5 15:47:18.936607 containerd[1603]: time="2025-11-05T15:47:18.936600364Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 15:47:18.936607 containerd[1603]: time="2025-11-05T15:47:18.936606744Z" level=info msg="runtime interface starting up..."
Nov 5 15:47:18.936690 containerd[1603]: time="2025-11-05T15:47:18.936612564Z" level=info msg="starting plugins..."
Nov 5 15:47:18.936690 containerd[1603]: time="2025-11-05T15:47:18.936626294Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 15:47:18.937147 containerd[1603]: time="2025-11-05T15:47:18.937118183Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 15:47:18.937297 containerd[1603]: time="2025-11-05T15:47:18.937274763Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 15:47:18.938712 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 15:47:18.939466 containerd[1603]: time="2025-11-05T15:47:18.939431801Z" level=info msg="containerd successfully booted in 0.244640s"
Nov 5 15:47:18.948182 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 15:47:19.071368 coreos-metadata[1568]: Nov 05 15:47:19.071 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Nov 5 15:47:19.160944 coreos-metadata[1568]: Nov 05 15:47:19.160 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Nov 5 15:47:19.344582 coreos-metadata[1568]: Nov 05 15:47:19.344 INFO Fetch successful
Nov 5 15:47:19.344582 coreos-metadata[1568]: Nov 05 15:47:19.344 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Nov 5 15:47:19.500748 systemd-networkd[1515]: eth0: Gained IPv6LL
Nov 5 15:47:19.501376 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Nov 5 15:47:19.504237 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 15:47:19.505529 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 15:47:19.508193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:47:19.512678 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 15:47:19.538284 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 15:47:19.601721 coreos-metadata[1568]: Nov 05 15:47:19.601 INFO Fetch successful
Nov 5 15:47:19.707667 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 5 15:47:19.709150 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 5 15:47:20.395729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:47:20.397023 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 15:47:20.399598 systemd[1]: Startup finished in 2.412s (kernel) + 6.086s (initrd) + 5.601s (userspace) = 14.100s.
Nov 5 15:47:20.403977 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:47:20.896904 kubelet[1750]: E1105 15:47:20.896794 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:47:20.899917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:47:20.900119 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:47:20.900515 systemd[1]: kubelet.service: Consumed 847ms CPU time, 265.7M memory peak.
Nov 5 15:47:21.001576 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Nov 5 15:47:21.805965 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 15:47:21.807861 systemd[1]: Started sshd@0-172.239.60.160:22-139.178.89.65:40408.service - OpenSSH per-connection server daemon (139.178.89.65:40408).
Nov 5 15:47:22.170733 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 40408 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:47:22.174014 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:47:22.181911 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 15:47:22.183761 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 15:47:22.191831 systemd-logind[1580]: New session 1 of user core.
Nov 5 15:47:22.201201 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 15:47:22.204963 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 15:47:22.215331 (systemd)[1767]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 15:47:22.217833 systemd-logind[1580]: New session c1 of user core.
Nov 5 15:47:22.346573 systemd[1767]: Queued start job for default target default.target.
Nov 5 15:47:22.358874 systemd[1767]: Created slice app.slice - User Application Slice.
Nov 5 15:47:22.358900 systemd[1767]: Reached target paths.target - Paths.
Nov 5 15:47:22.358944 systemd[1767]: Reached target timers.target - Timers.
Nov 5 15:47:22.360382 systemd[1767]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 15:47:22.371365 systemd[1767]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 15:47:22.371510 systemd[1767]: Reached target sockets.target - Sockets.
Nov 5 15:47:22.371617 systemd[1767]: Reached target basic.target - Basic System.
Nov 5 15:47:22.371674 systemd[1767]: Reached target default.target - Main User Target.
Nov 5 15:47:22.371708 systemd[1767]: Startup finished in 147ms.
Nov 5 15:47:22.372354 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 15:47:22.379624 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 15:47:22.638005 systemd[1]: Started sshd@1-172.239.60.160:22-139.178.89.65:40412.service - OpenSSH per-connection server daemon (139.178.89.65:40412).
Nov 5 15:47:22.977380 sshd[1778]: Accepted publickey for core from 139.178.89.65 port 40412 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:47:22.978790 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:47:22.982835 systemd-logind[1580]: New session 2 of user core.
Nov 5 15:47:22.992612 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 15:47:23.021272 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Nov 5 15:47:23.227212 sshd[1781]: Connection closed by 139.178.89.65 port 40412
Nov 5 15:47:23.227242 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Nov 5 15:47:23.231928 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit.
Nov 5 15:47:23.233117 systemd[1]: sshd@1-172.239.60.160:22-139.178.89.65:40412.service: Deactivated successfully.
Nov 5 15:47:23.234674 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 15:47:23.235895 systemd-logind[1580]: Removed session 2.
Nov 5 15:47:23.301415 systemd[1]: Started sshd@2-172.239.60.160:22-139.178.89.65:40426.service - OpenSSH per-connection server daemon (139.178.89.65:40426).
Nov 5 15:47:23.653093 sshd[1787]: Accepted publickey for core from 139.178.89.65 port 40426 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:47:23.654607 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:47:23.659543 systemd-logind[1580]: New session 3 of user core.
Nov 5 15:47:23.667591 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 15:47:23.907019 sshd[1790]: Connection closed by 139.178.89.65 port 40426
Nov 5 15:47:23.907649 sshd-session[1787]: pam_unix(sshd:session): session closed for user core
Nov 5 15:47:23.911595 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit.
Nov 5 15:47:23.911783 systemd[1]: sshd@2-172.239.60.160:22-139.178.89.65:40426.service: Deactivated successfully.
Nov 5 15:47:23.913683 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 15:47:23.915367 systemd-logind[1580]: Removed session 3.
Nov 5 15:47:23.970845 systemd[1]: Started sshd@3-172.239.60.160:22-139.178.89.65:40442.service - OpenSSH per-connection server daemon (139.178.89.65:40442).
Nov 5 15:47:24.316091 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 40442 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:47:24.317901 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:47:24.323962 systemd-logind[1580]: New session 4 of user core.
Nov 5 15:47:24.330691 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 15:47:24.569283 sshd[1799]: Connection closed by 139.178.89.65 port 40442
Nov 5 15:47:24.570258 sshd-session[1796]: pam_unix(sshd:session): session closed for user core
Nov 5 15:47:24.574458 systemd[1]: sshd@3-172.239.60.160:22-139.178.89.65:40442.service: Deactivated successfully.
Nov 5 15:47:24.576372 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 15:47:24.577252 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit.
Nov 5 15:47:24.578678 systemd-logind[1580]: Removed session 4.
Nov 5 15:47:24.634030 systemd[1]: Started sshd@4-172.239.60.160:22-139.178.89.65:40456.service - OpenSSH per-connection server daemon (139.178.89.65:40456).
Nov 5 15:47:24.977925 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 40456 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:47:24.979436 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:47:24.988419 systemd-logind[1580]: New session 5 of user core.
Nov 5 15:47:25.004781 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 15:47:25.188878 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 15:47:25.189202 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:47:25.203088 sudo[1809]: pam_unix(sudo:session): session closed for user root
Nov 5 15:47:25.254744 sshd[1808]: Connection closed by 139.178.89.65 port 40456
Nov 5 15:47:25.255600 sshd-session[1805]: pam_unix(sshd:session): session closed for user core
Nov 5 15:47:25.259694 systemd[1]: sshd@4-172.239.60.160:22-139.178.89.65:40456.service: Deactivated successfully.
Nov 5 15:47:25.261723 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 15:47:25.264222 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit.
Nov 5 15:47:25.265312 systemd-logind[1580]: Removed session 5.
Nov 5 15:47:25.317853 systemd[1]: Started sshd@5-172.239.60.160:22-139.178.89.65:40468.service - OpenSSH per-connection server daemon (139.178.89.65:40468).
Nov 5 15:47:25.660002 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 40468 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:47:25.661157 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:47:25.665356 systemd-logind[1580]: New session 6 of user core.
Nov 5 15:47:25.669675 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 15:47:25.854205 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 15:47:25.854585 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:47:25.858793 sudo[1820]: pam_unix(sudo:session): session closed for user root
Nov 5 15:47:25.865097 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 15:47:25.865648 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:47:25.874911 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:47:25.918058 augenrules[1842]: No rules
Nov 5 15:47:25.919673 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:47:25.919925 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:47:25.920833 sudo[1819]: pam_unix(sudo:session): session closed for user root
Nov 5 15:47:25.970729 sshd[1818]: Connection closed by 139.178.89.65 port 40468
Nov 5 15:47:25.971234 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
Nov 5 15:47:25.976009 systemd[1]: sshd@5-172.239.60.160:22-139.178.89.65:40468.service: Deactivated successfully.
Nov 5 15:47:25.977861 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 15:47:25.979025 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit.
Nov 5 15:47:25.980178 systemd-logind[1580]: Removed session 6.
Nov 5 15:47:26.033329 systemd[1]: Started sshd@6-172.239.60.160:22-139.178.89.65:51606.service - OpenSSH per-connection server daemon (139.178.89.65:51606).
Nov 5 15:47:26.384222 sshd[1851]: Accepted publickey for core from 139.178.89.65 port 51606 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:47:26.386603 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:47:26.392847 systemd-logind[1580]: New session 7 of user core.
Nov 5 15:47:26.404696 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 15:47:26.588574 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 15:47:26.588939 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:47:26.913358 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 15:47:26.930800 (dockerd)[1874]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 15:47:27.158378 dockerd[1874]: time="2025-11-05T15:47:27.158297452Z" level=info msg="Starting up"
Nov 5 15:47:27.159170 dockerd[1874]: time="2025-11-05T15:47:27.159130301Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 15:47:27.172861 dockerd[1874]: time="2025-11-05T15:47:27.172570278Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 15:47:27.214665 dockerd[1874]: time="2025-11-05T15:47:27.214636996Z" level=info msg="Loading containers: start."
Nov 5 15:47:27.226512 kernel: Initializing XFRM netlink socket
Nov 5 15:47:27.430503 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Nov 5 15:47:27.438216 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Nov 5 15:47:27.475240 systemd-networkd[1515]: docker0: Link UP
Nov 5 15:47:27.475605 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection.
Nov 5 15:47:27.477636 dockerd[1874]: time="2025-11-05T15:47:27.477609043Z" level=info msg="Loading containers: done."
Nov 5 15:47:27.489179 dockerd[1874]: time="2025-11-05T15:47:27.488888381Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 15:47:27.489179 dockerd[1874]: time="2025-11-05T15:47:27.488941851Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 15:47:27.489179 dockerd[1874]: time="2025-11-05T15:47:27.489017451Z" level=info msg="Initializing buildkit"
Nov 5 15:47:27.490263 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2438819890-merged.mount: Deactivated successfully.
Nov 5 15:47:27.509064 dockerd[1874]: time="2025-11-05T15:47:27.509034311Z" level=info msg="Completed buildkit initialization"
Nov 5 15:47:27.515791 dockerd[1874]: time="2025-11-05T15:47:27.515773245Z" level=info msg="Daemon has completed initialization"
Nov 5 15:47:27.515916 dockerd[1874]: time="2025-11-05T15:47:27.515882494Z" level=info msg="API listen on /run/docker.sock"
Nov 5 15:47:27.515994 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 15:47:28.114245 containerd[1603]: time="2025-11-05T15:47:28.114210916Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 5 15:47:28.869557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916199993.mount: Deactivated successfully.
Nov 5 15:47:30.042205 containerd[1603]: time="2025-11-05T15:47:30.042138148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:30.043078 containerd[1603]: time="2025-11-05T15:47:30.042976647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Nov 5 15:47:30.043479 containerd[1603]: time="2025-11-05T15:47:30.043440537Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:30.045513 containerd[1603]: time="2025-11-05T15:47:30.045380305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:30.046793 containerd[1603]: time="2025-11-05T15:47:30.046175564Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.931928998s"
Nov 5 15:47:30.046793 containerd[1603]: time="2025-11-05T15:47:30.046204234Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 5 15:47:30.047355 containerd[1603]: time="2025-11-05T15:47:30.047332743Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 5 15:47:31.150818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:47:31.153318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:47:31.354634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:47:31.363762 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:47:31.422520 kubelet[2153]: E1105 15:47:31.422399 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:47:31.427371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:47:31.427577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:47:31.428034 systemd[1]: kubelet.service: Consumed 193ms CPU time, 110.4M memory peak.
Nov 5 15:47:31.642212 containerd[1603]: time="2025-11-05T15:47:31.642147748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:31.643184 containerd[1603]: time="2025-11-05T15:47:31.642996677Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Nov 5 15:47:31.643875 containerd[1603]: time="2025-11-05T15:47:31.643845566Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:31.646158 containerd[1603]: time="2025-11-05T15:47:31.646129114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:31.646965 containerd[1603]: time="2025-11-05T15:47:31.646933553Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.59957421s"
Nov 5 15:47:31.647008 containerd[1603]: time="2025-11-05T15:47:31.646965193Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 5 15:47:31.647380 containerd[1603]: time="2025-11-05T15:47:31.647349653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 5 15:47:32.957867 containerd[1603]: time="2025-11-05T15:47:32.957808472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:32.958766 containerd[1603]: time="2025-11-05T15:47:32.958556552Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Nov 5 15:47:32.959325 containerd[1603]: time="2025-11-05T15:47:32.959301761Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:32.961250 containerd[1603]: time="2025-11-05T15:47:32.961229239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:32.962052 containerd[1603]: time="2025-11-05T15:47:32.962032428Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.314653805s"
Nov 5 15:47:32.962124 containerd[1603]: time="2025-11-05T15:47:32.962110598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 5 15:47:32.962555 containerd[1603]: time="2025-11-05T15:47:32.962532788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 5 15:47:34.128320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247781320.mount: Deactivated successfully.
Nov 5 15:47:34.483405 containerd[1603]: time="2025-11-05T15:47:34.483087237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:34.484309 containerd[1603]: time="2025-11-05T15:47:34.484039476Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Nov 5 15:47:34.485520 containerd[1603]: time="2025-11-05T15:47:34.485470975Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:34.487302 containerd[1603]: time="2025-11-05T15:47:34.487262793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:34.487962 containerd[1603]: time="2025-11-05T15:47:34.487798502Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.525241934s"
Nov 5 15:47:34.487962 containerd[1603]: time="2025-11-05T15:47:34.487826882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 5 15:47:34.488273 containerd[1603]: time="2025-11-05T15:47:34.488233032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 5 15:47:35.200961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015412485.mount: Deactivated successfully.
Nov 5 15:47:35.975610 containerd[1603]: time="2025-11-05T15:47:35.975557285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:35.976863 containerd[1603]: time="2025-11-05T15:47:35.976825823Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 5 15:47:35.977008 containerd[1603]: time="2025-11-05T15:47:35.976985793Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:35.979317 containerd[1603]: time="2025-11-05T15:47:35.979289761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:35.980632 containerd[1603]: time="2025-11-05T15:47:35.980610600Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.492341268s"
Nov 5 15:47:35.980708 containerd[1603]: time="2025-11-05T15:47:35.980694400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 5 15:47:35.981757 containerd[1603]: time="2025-11-05T15:47:35.981730749Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 15:47:36.611999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269362274.mount: Deactivated successfully.
Nov 5 15:47:36.617846 containerd[1603]: time="2025-11-05T15:47:36.617810842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:47:36.618660 containerd[1603]: time="2025-11-05T15:47:36.618430832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 15:47:36.619278 containerd[1603]: time="2025-11-05T15:47:36.619232431Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:47:36.621172 containerd[1603]: time="2025-11-05T15:47:36.621140269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:47:36.622160 containerd[1603]: time="2025-11-05T15:47:36.622136158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 640.37839ms"
Nov 5 15:47:36.622223 containerd[1603]: time="2025-11-05T15:47:36.622161928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 15:47:36.622756 containerd[1603]: time="2025-11-05T15:47:36.622714898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 5 15:47:37.413882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791912628.mount: Deactivated successfully.
Nov 5 15:47:38.937667 containerd[1603]: time="2025-11-05T15:47:38.937589403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:38.938614 containerd[1603]: time="2025-11-05T15:47:38.938393472Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 5 15:47:38.939390 containerd[1603]: time="2025-11-05T15:47:38.939020841Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:38.941046 containerd[1603]: time="2025-11-05T15:47:38.941025239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:47:38.941961 containerd[1603]: time="2025-11-05T15:47:38.941940028Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.3191987s"
Nov 5 15:47:38.942035 containerd[1603]: time="2025-11-05T15:47:38.942021338Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 5 15:47:40.685921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:47:40.686065 systemd[1]: kubelet.service: Consumed 193ms CPU time, 110.4M memory peak.
Nov 5 15:47:40.688204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:47:40.725235 systemd[1]: Reload requested from client PID 2309 ('systemctl') (unit session-7.scope)...
Nov 5 15:47:40.725277 systemd[1]: Reloading...
Nov 5 15:47:40.856241 zram_generator::config[2354]: No configuration found.
Nov 5 15:47:41.085985 systemd[1]: Reloading finished in 360 ms.
Nov 5 15:47:41.136041 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 15:47:41.136144 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 15:47:41.136418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:47:41.136461 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.4M memory peak.
Nov 5 15:47:41.138517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:47:41.310030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:47:41.316912 (kubelet)[2408]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 15:47:41.353846 kubelet[2408]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:47:41.353846 kubelet[2408]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 15:47:41.353846 kubelet[2408]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:47:41.353846 kubelet[2408]: I1105 15:47:41.353656 2408 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 15:47:41.590982 kubelet[2408]: I1105 15:47:41.590944 2408 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 5 15:47:41.590982 kubelet[2408]: I1105 15:47:41.590969 2408 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 15:47:41.591204 kubelet[2408]: I1105 15:47:41.591183 2408 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 5 15:47:41.627567 kubelet[2408]: E1105 15:47:41.626442 2408 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.239.60.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.60.160:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:47:41.629050 kubelet[2408]: I1105 15:47:41.629028 2408 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:47:41.637082 kubelet[2408]: I1105 15:47:41.637062 2408 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 15:47:41.641570 kubelet[2408]: I1105 15:47:41.641542 2408 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 15:47:41.641790 kubelet[2408]: I1105 15:47:41.641761 2408 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 15:47:41.641924 kubelet[2408]: I1105 15:47:41.641787 2408 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-60-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 15:47:41.642687 kubelet[2408]: I1105 15:47:41.642654 2408 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 15:47:41.642687 kubelet[2408]: I1105 15:47:41.642683 2408 container_manager_linux.go:304] "Creating device plugin manager"
Nov 5 15:47:41.642875 kubelet[2408]: I1105 15:47:41.642849 2408 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:47:41.647881 kubelet[2408]: I1105 15:47:41.647856 2408 kubelet.go:446] "Attempting to sync node with API server"
Nov 5 15:47:41.647941 kubelet[2408]: I1105 15:47:41.647896 2408 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 15:47:41.647941 kubelet[2408]: I1105 15:47:41.647928 2408 kubelet.go:352] "Adding apiserver pod source"
Nov 5 15:47:41.648215 kubelet[2408]: I1105 15:47:41.647946 2408 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 15:47:41.657569 kubelet[2408]: I1105 15:47:41.657438 2408 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 15:47:41.657841 kubelet[2408]: W1105 15:47:41.657807 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.239.60.160:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-60-160&limit=500&resourceVersion=0": dial tcp 172.239.60.160:6443: connect: connection refused
Nov 5 15:47:41.657926 kubelet[2408]: E1105 15:47:41.657909 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.239.60.160:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-60-160&limit=500&resourceVersion=0\": dial tcp 172.239.60.160:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:47:41.657972 kubelet[2408]: I1105 15:47:41.657917 2408 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 5 15:47:41.658057 kubelet[2408]: W1105 15:47:41.658047 2408 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 15:47:41.658771 kubelet[2408]: W1105 15:47:41.658048 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.239.60.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.239.60.160:6443: connect: connection refused
Nov 5 15:47:41.658771 kubelet[2408]: E1105 15:47:41.658216 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.239.60.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.60.160:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:47:41.660407 kubelet[2408]: I1105 15:47:41.660371 2408 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 15:47:41.660472 kubelet[2408]: I1105 15:47:41.660413 2408 server.go:1287] "Started kubelet"
Nov 5 15:47:41.660629 kubelet[2408]: I1105 15:47:41.660608 2408 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 15:47:41.662030 kubelet[2408]: I1105 15:47:41.661998 2408 server.go:479] "Adding debug handlers to kubelet server"
Nov 5 15:47:41.666174 kubelet[2408]: I1105 15:47:41.666139 2408 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:47:41.668458 kubelet[2408]: I1105 15:47:41.668012 2408 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 15:47:41.668458 kubelet[2408]: I1105 15:47:41.668222 2408 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:47:41.669910 kubelet[2408]: E1105 15:47:41.668729 2408 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.60.160:6443/api/v1/namespaces/default/events\": dial tcp 172.239.60.160:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-60-160.187526f4b674a7ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-60-160,UID:172-239-60-160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-60-160,},FirstTimestamp:2025-11-05 15:47:41.66039134 +0000 UTC m=+0.339832381,LastTimestamp:2025-11-05 15:47:41.66039134 +0000 UTC m=+0.339832381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-60-160,}"
Nov 5 15:47:41.671191 kubelet[2408]: I1105 15:47:41.670709 2408 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:47:41.671610 kubelet[2408]: I1105 15:47:41.671582 2408 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 15:47:41.671888 kubelet[2408]: E1105 15:47:41.671860 2408 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-60-160\" not found"
Nov 5 15:47:41.676086 kubelet[2408]: I1105 15:47:41.675361 2408 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 15:47:41.676086 kubelet[2408]: I1105 15:47:41.675415 2408 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 15:47:41.677039 kubelet[2408]: W1105 15:47:41.677000 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.239.60.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.239.60.160:6443: connect: connection refused
Nov 5 15:47:41.677099 kubelet[2408]: E1105 15:47:41.677053 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.239.60.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.60.160:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:47:41.677160 kubelet[2408]: E1105 15:47:41.677127 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.60.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-60-160?timeout=10s\": dial tcp 172.239.60.160:6443: connect: connection refused" interval="200ms"
Nov 5 15:47:41.677881 kubelet[2408]: I1105 15:47:41.677855 2408 factory.go:221] Registration of the systemd container factory successfully
Nov 5 15:47:41.677975 kubelet[2408]: I1105 15:47:41.677952 2408 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:47:41.680331 kubelet[2408]: I1105 15:47:41.680308 2408 factory.go:221] Registration of the containerd container factory successfully
Nov 5 15:47:41.690525 kubelet[2408]: E1105 15:47:41.689926 2408 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 15:47:41.700761 kubelet[2408]: I1105 15:47:41.700721 2408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:47:41.703633 kubelet[2408]: I1105 15:47:41.703589 2408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:47:41.703633 kubelet[2408]: I1105 15:47:41.703634 2408 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 5 15:47:41.703762 kubelet[2408]: I1105 15:47:41.703666 2408 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:47:41.703762 kubelet[2408]: I1105 15:47:41.703678 2408 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 5 15:47:41.703762 kubelet[2408]: E1105 15:47:41.703728 2408 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:47:41.706406 kubelet[2408]: W1105 15:47:41.706193 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.239.60.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.239.60.160:6443: connect: connection refused
Nov 5 15:47:41.706406 kubelet[2408]: E1105 15:47:41.706253 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.239.60.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.60.160:6443: connect: connection refused" logger="UnhandledError"
Nov 5 15:47:41.712191 kubelet[2408]: I1105 15:47:41.712173 2408 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:47:41.712191 kubelet[2408]: I1105 15:47:41.712187 2408 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:47:41.712314 kubelet[2408]: I1105 15:47:41.712202 2408 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:47:41.713991 kubelet[2408]: I1105 15:47:41.713974 2408 policy_none.go:49] "None policy: Start"
Nov 5 15:47:41.713991 kubelet[2408]: I1105 15:47:41.713990 2408 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:47:41.714122 kubelet[2408]: I1105 15:47:41.714000 2408 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:47:41.720097 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 15:47:41.735026 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 15:47:41.740292 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 15:47:41.755628 kubelet[2408]: I1105 15:47:41.755608 2408 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 5 15:47:41.755903 kubelet[2408]: I1105 15:47:41.755816 2408 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:47:41.755903 kubelet[2408]: I1105 15:47:41.755838 2408 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:47:41.756324 kubelet[2408]: I1105 15:47:41.756309 2408 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:47:41.758244 kubelet[2408]: E1105 15:47:41.758210 2408 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:47:41.758571 kubelet[2408]: E1105 15:47:41.758541 2408 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-60-160\" not found"
Nov 5 15:47:41.817009 systemd[1]: Created slice kubepods-burstable-pod9bd896d75e2152c6db83739b057ab1a0.slice - libcontainer container kubepods-burstable-pod9bd896d75e2152c6db83739b057ab1a0.slice.
Nov 5 15:47:41.826781 kubelet[2408]: E1105 15:47:41.826734 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160"
Nov 5 15:47:41.830112 systemd[1]: Created slice kubepods-burstable-pod7d15cf26fd49605570c9da6593e6a5ce.slice - libcontainer container kubepods-burstable-pod7d15cf26fd49605570c9da6593e6a5ce.slice.
Nov 5 15:47:41.838194 kubelet[2408]: E1105 15:47:41.837969 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160"
Nov 5 15:47:41.841353 systemd[1]: Created slice kubepods-burstable-pod1f65bdfde111218b6439edb2b14253bb.slice - libcontainer container kubepods-burstable-pod1f65bdfde111218b6439edb2b14253bb.slice.
Nov 5 15:47:41.843551 kubelet[2408]: E1105 15:47:41.843528 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160"
Nov 5 15:47:41.858468 kubelet[2408]: I1105 15:47:41.858449 2408 kubelet_node_status.go:75] "Attempting to register node" node="172-239-60-160"
Nov 5 15:47:41.859129 kubelet[2408]: E1105 15:47:41.859095 2408 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.60.160:6443/api/v1/nodes\": dial tcp 172.239.60.160:6443: connect: connection refused" node="172-239-60-160"
Nov 5 15:47:41.877391 kubelet[2408]: I1105 15:47:41.877247 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-ca-certs\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160"
Nov 5 15:47:41.877651 kubelet[2408]: E1105 15:47:41.877452 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.60.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-60-160?timeout=10s\": dial tcp 172.239.60.160:6443: connect: connection refused" interval="400ms"
Nov 5 15:47:41.978348 kubelet[2408]: I1105 15:47:41.978310 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-kubeconfig\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160"
Nov 5 15:47:41.978348 kubelet[2408]: I1105 15:47:41.978348 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bd896d75e2152c6db83739b057ab1a0-kubeconfig\") pod \"kube-scheduler-172-239-60-160\" (UID: \"9bd896d75e2152c6db83739b057ab1a0\") " pod="kube-system/kube-scheduler-172-239-60-160"
Nov 5 15:47:41.978348 kubelet[2408]: I1105 15:47:41.978365 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d15cf26fd49605570c9da6593e6a5ce-ca-certs\") pod \"kube-apiserver-172-239-60-160\" (UID: \"7d15cf26fd49605570c9da6593e6a5ce\") " pod="kube-system/kube-apiserver-172-239-60-160"
Nov 5 15:47:41.978896 kubelet[2408]: I1105 15:47:41.978400 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-flexvolume-dir\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160"
Nov 5 15:47:41.978896 kubelet[2408]: I1105 15:47:41.978414 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-k8s-certs\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160"
Nov 5 15:47:41.978896 kubelet[2408]: I1105 15:47:41.978428 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160"
Nov 5 15:47:41.978896 kubelet[2408]: I1105 15:47:41.978441 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d15cf26fd49605570c9da6593e6a5ce-k8s-certs\") pod \"kube-apiserver-172-239-60-160\" (UID: \"7d15cf26fd49605570c9da6593e6a5ce\") " pod="kube-system/kube-apiserver-172-239-60-160"
Nov 5 15:47:41.978896 kubelet[2408]: I1105 15:47:41.978471 2408 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d15cf26fd49605570c9da6593e6a5ce-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-60-160\" (UID: \"7d15cf26fd49605570c9da6593e6a5ce\") " pod="kube-system/kube-apiserver-172-239-60-160"
Nov 5 15:47:42.061077 kubelet[2408]: I1105 15:47:42.061041 2408 kubelet_node_status.go:75] "Attempting to register node" node="172-239-60-160"
Nov 5 15:47:42.061328 kubelet[2408]: E1105 15:47:42.061308 2408 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.60.160:6443/api/v1/nodes\": dial tcp 172.239.60.160:6443: connect: connection refused" node="172-239-60-160"
Nov 5 15:47:42.128173 kubelet[2408]: E1105 15:47:42.128087 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:47:42.128860 containerd[1603]: time="2025-11-05T15:47:42.128809871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-60-160,Uid:9bd896d75e2152c6db83739b057ab1a0,Namespace:kube-system,Attempt:0,}"
Nov 5 15:47:42.139276 kubelet[2408]: E1105 15:47:42.139062 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:47:42.139717 containerd[1603]: time="2025-11-05T15:47:42.139513651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-60-160,Uid:7d15cf26fd49605570c9da6593e6a5ce,Namespace:kube-system,Attempt:0,}"
Nov 5 15:47:42.143956 kubelet[2408]: E1105 15:47:42.143931 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:47:42.158600 containerd[1603]: time="2025-11-05T15:47:42.158079832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-60-160,Uid:1f65bdfde111218b6439edb2b14253bb,Namespace:kube-system,Attempt:0,}"
Nov 5 15:47:42.162290 containerd[1603]: time="2025-11-05T15:47:42.162264968Z" level=info msg="connecting to shim 39d5dc3eb3086b1826ad0d7bd9ac833c3c29133d304ed9eab078027c13856f6c" address="unix:///run/containerd/s/38b223c04b7faf146ab7a921839765145ad8755cb18465160466fcc00e01c07c" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:47:42.179548 containerd[1603]: time="2025-11-05T15:47:42.179524181Z" level=info msg="connecting to shim 661fe06dfdb51a1a71fdcdde7a9c66629cc66278cba7966b858e7e9459686512" address="unix:///run/containerd/s/3f36ec2e09fdb8f501eadc073a6cf315ac393022d5ce56f01799014de38eced3" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:47:42.194078 containerd[1603]: time="2025-11-05T15:47:42.194028956Z" level=info msg="connecting to shim 3b43f74326b5a2d8a01423bc1968e13a267c5cf4404e88fa600880cfbc0c25bb" address="unix:///run/containerd/s/4ee7610e5ffb6ee6b3474ef5ab3ab8e9f149b60597fb775e39feba798b650e5c" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:47:42.214742 systemd[1]: Started cri-containerd-661fe06dfdb51a1a71fdcdde7a9c66629cc66278cba7966b858e7e9459686512.scope - libcontainer container 661fe06dfdb51a1a71fdcdde7a9c66629cc66278cba7966b858e7e9459686512.
Nov 5 15:47:42.218744 systemd[1]: Started cri-containerd-39d5dc3eb3086b1826ad0d7bd9ac833c3c29133d304ed9eab078027c13856f6c.scope - libcontainer container 39d5dc3eb3086b1826ad0d7bd9ac833c3c29133d304ed9eab078027c13856f6c.
Nov 5 15:47:42.235723 systemd[1]: Started cri-containerd-3b43f74326b5a2d8a01423bc1968e13a267c5cf4404e88fa600880cfbc0c25bb.scope - libcontainer container 3b43f74326b5a2d8a01423bc1968e13a267c5cf4404e88fa600880cfbc0c25bb.
Nov 5 15:47:42.279083 kubelet[2408]: E1105 15:47:42.279032 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.60.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-60-160?timeout=10s\": dial tcp 172.239.60.160:6443: connect: connection refused" interval="800ms"
Nov 5 15:47:42.301875 containerd[1603]: time="2025-11-05T15:47:42.300232740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-60-160,Uid:1f65bdfde111218b6439edb2b14253bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b43f74326b5a2d8a01423bc1968e13a267c5cf4404e88fa600880cfbc0c25bb\""
Nov 5 15:47:42.302943 kubelet[2408]: E1105 15:47:42.302630 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:47:42.308735 containerd[1603]: time="2025-11-05T15:47:42.308712081Z" level=info msg="CreateContainer within sandbox \"3b43f74326b5a2d8a01423bc1968e13a267c5cf4404e88fa600880cfbc0c25bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 15:47:42.316965 containerd[1603]: time="2025-11-05T15:47:42.316914613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-60-160,Uid:7d15cf26fd49605570c9da6593e6a5ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"661fe06dfdb51a1a71fdcdde7a9c66629cc66278cba7966b858e7e9459686512\""
Nov 5 15:47:42.320722 kubelet[2408]: E1105 15:47:42.320702 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:47:42.322451 containerd[1603]: time="2025-11-05T15:47:42.322427968Z" level=info msg="CreateContainer within sandbox \"661fe06dfdb51a1a71fdcdde7a9c66629cc66278cba7966b858e7e9459686512\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 15:47:42.324021 containerd[1603]: time="2025-11-05T15:47:42.323971596Z" level=info msg="Container 980f19301508e693c7563ae341e0b30a39d0d0c22234a602e9c42731541a3aa5: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:47:42.328273 containerd[1603]: time="2025-11-05T15:47:42.327788432Z" level=info msg="Container 6c0e9b1ae26d5a2a060f2cf529d7bf25d375361cf9bce35c67bd497c216a302b: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:47:42.335309 containerd[1603]: time="2025-11-05T15:47:42.335263745Z" level=info msg="CreateContainer within sandbox \"3b43f74326b5a2d8a01423bc1968e13a267c5cf4404e88fa600880cfbc0c25bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"980f19301508e693c7563ae341e0b30a39d0d0c22234a602e9c42731541a3aa5\""
Nov 5 15:47:42.337507 containerd[1603]: time="2025-11-05T15:47:42.337295473Z" level=info msg="CreateContainer within sandbox \"661fe06dfdb51a1a71fdcdde7a9c66629cc66278cba7966b858e7e9459686512\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6c0e9b1ae26d5a2a060f2cf529d7bf25d375361cf9bce35c67bd497c216a302b\""
Nov 5 15:47:42.337984 containerd[1603]: time="2025-11-05T15:47:42.337965842Z" level=info msg="StartContainer for \"6c0e9b1ae26d5a2a060f2cf529d7bf25d375361cf9bce35c67bd497c216a302b\""
Nov 5 15:47:42.339126 containerd[1603]: time="2025-11-05T15:47:42.339105661Z" level=info msg="connecting to shim 6c0e9b1ae26d5a2a060f2cf529d7bf25d375361cf9bce35c67bd497c216a302b" address="unix:///run/containerd/s/3f36ec2e09fdb8f501eadc073a6cf315ac393022d5ce56f01799014de38eced3" protocol=ttrpc version=3
Nov 5 15:47:42.340636 containerd[1603]: time="2025-11-05T15:47:42.340565020Z" level=info msg="StartContainer for \"980f19301508e693c7563ae341e0b30a39d0d0c22234a602e9c42731541a3aa5\""
Nov 5 15:47:42.342073 containerd[1603]: time="2025-11-05T15:47:42.342052518Z" level=info msg="connecting to shim 980f19301508e693c7563ae341e0b30a39d0d0c22234a602e9c42731541a3aa5" address="unix:///run/containerd/s/4ee7610e5ffb6ee6b3474ef5ab3ab8e9f149b60597fb775e39feba798b650e5c" protocol=ttrpc version=3
Nov 5 15:47:42.367334 containerd[1603]: time="2025-11-05T15:47:42.367264493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-60-160,Uid:9bd896d75e2152c6db83739b057ab1a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d5dc3eb3086b1826ad0d7bd9ac833c3c29133d304ed9eab078027c13856f6c\""
Nov 5 15:47:42.367710 systemd[1]: Started cri-containerd-980f19301508e693c7563ae341e0b30a39d0d0c22234a602e9c42731541a3aa5.scope - libcontainer container 980f19301508e693c7563ae341e0b30a39d0d0c22234a602e9c42731541a3aa5.
Nov 5 15:47:42.369908 kubelet[2408]: E1105 15:47:42.369866 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:47:42.375499 containerd[1603]: time="2025-11-05T15:47:42.375415685Z" level=info msg="CreateContainer within sandbox \"39d5dc3eb3086b1826ad0d7bd9ac833c3c29133d304ed9eab078027c13856f6c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 15:47:42.379712 systemd[1]: Started cri-containerd-6c0e9b1ae26d5a2a060f2cf529d7bf25d375361cf9bce35c67bd497c216a302b.scope - libcontainer container 6c0e9b1ae26d5a2a060f2cf529d7bf25d375361cf9bce35c67bd497c216a302b.
Nov 5 15:47:42.395178 containerd[1603]: time="2025-11-05T15:47:42.395156145Z" level=info msg="Container 2b15be901e3d95cbd2c733e38b61e119c8b059c849c2818cc5547c9f94b911bb: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:47:42.399159 containerd[1603]: time="2025-11-05T15:47:42.399121681Z" level=info msg="CreateContainer within sandbox \"39d5dc3eb3086b1826ad0d7bd9ac833c3c29133d304ed9eab078027c13856f6c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2b15be901e3d95cbd2c733e38b61e119c8b059c849c2818cc5547c9f94b911bb\""
Nov 5 15:47:42.399974 containerd[1603]: time="2025-11-05T15:47:42.399957740Z" level=info msg="StartContainer for \"2b15be901e3d95cbd2c733e38b61e119c8b059c849c2818cc5547c9f94b911bb\""
Nov 5 15:47:42.401365 containerd[1603]: time="2025-11-05T15:47:42.401332559Z" level=info msg="connecting to shim 2b15be901e3d95cbd2c733e38b61e119c8b059c849c2818cc5547c9f94b911bb" address="unix:///run/containerd/s/38b223c04b7faf146ab7a921839765145ad8755cb18465160466fcc00e01c07c" protocol=ttrpc version=3
Nov 5 15:47:42.432011 systemd[1]: Started cri-containerd-2b15be901e3d95cbd2c733e38b61e119c8b059c849c2818cc5547c9f94b911bb.scope - libcontainer container 2b15be901e3d95cbd2c733e38b61e119c8b059c849c2818cc5547c9f94b911bb.
Nov 5 15:47:42.464776 kubelet[2408]: I1105 15:47:42.464741 2408 kubelet_node_status.go:75] "Attempting to register node" node="172-239-60-160" Nov 5 15:47:42.465062 kubelet[2408]: E1105 15:47:42.465030 2408 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.60.160:6443/api/v1/nodes\": dial tcp 172.239.60.160:6443: connect: connection refused" node="172-239-60-160" Nov 5 15:47:42.492305 containerd[1603]: time="2025-11-05T15:47:42.492193148Z" level=info msg="StartContainer for \"980f19301508e693c7563ae341e0b30a39d0d0c22234a602e9c42731541a3aa5\" returns successfully" Nov 5 15:47:42.495638 containerd[1603]: time="2025-11-05T15:47:42.495581675Z" level=info msg="StartContainer for \"6c0e9b1ae26d5a2a060f2cf529d7bf25d375361cf9bce35c67bd497c216a302b\" returns successfully" Nov 5 15:47:42.529746 containerd[1603]: time="2025-11-05T15:47:42.529724790Z" level=info msg="StartContainer for \"2b15be901e3d95cbd2c733e38b61e119c8b059c849c2818cc5547c9f94b911bb\" returns successfully" Nov 5 15:47:42.531347 kubelet[2408]: W1105 15:47:42.531289 2408 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.239.60.160:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-60-160&limit=500&resourceVersion=0": dial tcp 172.239.60.160:6443: connect: connection refused Nov 5 15:47:42.531401 kubelet[2408]: E1105 15:47:42.531354 2408 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.239.60.160:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-60-160&limit=500&resourceVersion=0\": dial tcp 172.239.60.160:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:47:42.714497 kubelet[2408]: E1105 15:47:42.714377 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160" Nov 5 
15:47:42.714633 kubelet[2408]: E1105 15:47:42.714538 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:42.719064 kubelet[2408]: E1105 15:47:42.719039 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160" Nov 5 15:47:42.719163 kubelet[2408]: E1105 15:47:42.719141 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:42.720416 kubelet[2408]: E1105 15:47:42.720393 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160" Nov 5 15:47:42.720512 kubelet[2408]: E1105 15:47:42.720476 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:43.273554 kubelet[2408]: I1105 15:47:43.271974 2408 kubelet_node_status.go:75] "Attempting to register node" node="172-239-60-160" Nov 5 15:47:43.723742 kubelet[2408]: E1105 15:47:43.723645 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160" Nov 5 15:47:43.724094 kubelet[2408]: E1105 15:47:43.723762 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:43.725659 kubelet[2408]: E1105 15:47:43.725633 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"172-239-60-160\" not found" node="172-239-60-160" Nov 5 15:47:43.725753 kubelet[2408]: E1105 15:47:43.725730 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:43.727210 kubelet[2408]: E1105 15:47:43.727106 2408 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-60-160\" not found" node="172-239-60-160" Nov 5 15:47:43.727210 kubelet[2408]: E1105 15:47:43.727199 2408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:44.050017 kubelet[2408]: E1105 15:47:44.049477 2408 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-60-160\" not found" node="172-239-60-160" Nov 5 15:47:44.123992 kubelet[2408]: I1105 15:47:44.123956 2408 kubelet_node_status.go:78] "Successfully registered node" node="172-239-60-160" Nov 5 15:47:44.123992 kubelet[2408]: E1105 15:47:44.123986 2408 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-239-60-160\": node \"172-239-60-160\" not found" Nov 5 15:47:44.139642 kubelet[2408]: E1105 15:47:44.139609 2408 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-60-160\" not found" Nov 5 15:47:44.240771 kubelet[2408]: E1105 15:47:44.240731 2408 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-60-160\" not found" Nov 5 15:47:44.372982 kubelet[2408]: I1105 15:47:44.372421 2408 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:44.382390 kubelet[2408]: E1105 15:47:44.382294 2408 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-172-239-60-160\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:44.382390 kubelet[2408]: I1105 15:47:44.382356 2408 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-60-160" Nov 5 15:47:44.385063 kubelet[2408]: E1105 15:47:44.385022 2408 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-60-160\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-60-160" Nov 5 15:47:44.385063 kubelet[2408]: I1105 15:47:44.385045 2408 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:44.387130 kubelet[2408]: E1105 15:47:44.387057 2408 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-60-160\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:44.650323 kubelet[2408]: I1105 15:47:44.650172 2408 apiserver.go:52] "Watching apiserver" Nov 5 15:47:44.676171 kubelet[2408]: I1105 15:47:44.676132 2408 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:47:46.262412 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-7.scope)... Nov 5 15:47:46.262437 systemd[1]: Reloading... Nov 5 15:47:46.352548 zram_generator::config[2735]: No configuration found. Nov 5 15:47:46.564735 systemd[1]: Reloading finished in 301 ms. Nov 5 15:47:46.597524 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:47:46.609057 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:47:46.609346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:47:46.609402 systemd[1]: kubelet.service: Consumed 732ms CPU time, 133.7M memory peak. 
Nov 5 15:47:46.611294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:47:46.791503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:47:46.802796 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:47:46.835115 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:47:46.835115 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:47:46.835115 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:47:46.835509 kubelet[2780]: I1105 15:47:46.835145 2780 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:47:46.844667 kubelet[2780]: I1105 15:47:46.844404 2780 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 15:47:46.844667 kubelet[2780]: I1105 15:47:46.844433 2780 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:47:46.845323 kubelet[2780]: I1105 15:47:46.845288 2780 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 15:47:46.847024 kubelet[2780]: I1105 15:47:46.847009 2780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 5 15:47:46.849535 kubelet[2780]: I1105 15:47:46.849517 2780 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:47:46.853812 kubelet[2780]: I1105 15:47:46.853785 2780 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:47:46.858006 kubelet[2780]: I1105 15:47:46.857975 2780 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 15:47:46.858272 kubelet[2780]: I1105 15:47:46.858231 2780 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:47:46.858397 kubelet[2780]: I1105 15:47:46.858260 2780 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-60-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:47:46.858397 kubelet[2780]: I1105 15:47:46.858395 2780 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:47:46.858525 kubelet[2780]: I1105 15:47:46.858404 2780 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 15:47:46.858525 kubelet[2780]: I1105 15:47:46.858447 2780 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:47:46.858643 kubelet[2780]: I1105 15:47:46.858623 2780 kubelet.go:446] "Attempting to sync node with API server" Nov 5 15:47:46.858675 kubelet[2780]: I1105 15:47:46.858649 2780 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:47:46.859520 kubelet[2780]: I1105 15:47:46.859317 2780 kubelet.go:352] "Adding apiserver pod source" Nov 5 15:47:46.859520 kubelet[2780]: I1105 15:47:46.859335 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:47:46.861950 kubelet[2780]: I1105 15:47:46.861846 2780 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:47:46.863655 kubelet[2780]: I1105 15:47:46.862688 2780 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 15:47:46.865591 kubelet[2780]: I1105 15:47:46.865202 2780 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:47:46.865698 kubelet[2780]: I1105 15:47:46.865687 2780 server.go:1287] "Started kubelet" Nov 5 15:47:46.870412 kubelet[2780]: I1105 15:47:46.870371 2780 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:47:46.871300 kubelet[2780]: I1105 15:47:46.871282 2780 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:47:46.872861 kubelet[2780]: I1105 15:47:46.872168 2780 server.go:479] "Adding debug handlers to kubelet server" Nov 5 15:47:46.873276 kubelet[2780]: I1105 15:47:46.873032 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:47:46.873504 kubelet[2780]: I1105 15:47:46.873461 2780 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:47:46.885112 kubelet[2780]: I1105 15:47:46.885093 2780 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:47:46.886908 kubelet[2780]: I1105 15:47:46.886896 2780 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:47:46.888563 kubelet[2780]: I1105 15:47:46.888551 2780 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:47:46.888830 kubelet[2780]: I1105 15:47:46.888799 2780 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:47:46.889421 kubelet[2780]: E1105 15:47:46.889406 2780 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:47:46.889785 kubelet[2780]: I1105 15:47:46.889772 2780 factory.go:221] Registration of the systemd container factory successfully Nov 5 15:47:46.889954 kubelet[2780]: I1105 15:47:46.889939 2780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:47:46.893424 kubelet[2780]: I1105 15:47:46.893411 2780 factory.go:221] Registration of the containerd container factory successfully Nov 5 15:47:46.895023 kubelet[2780]: I1105 15:47:46.894989 2780 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 5 15:47:46.896586 kubelet[2780]: I1105 15:47:46.896561 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 15:47:46.896629 kubelet[2780]: I1105 15:47:46.896589 2780 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 15:47:46.896629 kubelet[2780]: I1105 15:47:46.896608 2780 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:47:46.896629 kubelet[2780]: I1105 15:47:46.896615 2780 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 15:47:46.896704 kubelet[2780]: E1105 15:47:46.896660 2780 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:47:46.953212 kubelet[2780]: I1105 15:47:46.953177 2780 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:47:46.953212 kubelet[2780]: I1105 15:47:46.953196 2780 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:47:46.953334 kubelet[2780]: I1105 15:47:46.953315 2780 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:47:46.953632 kubelet[2780]: I1105 15:47:46.953603 2780 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:47:46.953632 kubelet[2780]: I1105 15:47:46.953619 2780 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:47:46.953632 kubelet[2780]: I1105 15:47:46.953635 2780 policy_none.go:49] "None policy: Start" Nov 5 15:47:46.953720 kubelet[2780]: I1105 15:47:46.953649 2780 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:47:46.953720 kubelet[2780]: I1105 15:47:46.953660 2780 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:47:46.953789 kubelet[2780]: I1105 15:47:46.953770 2780 state_mem.go:75] "Updated machine memory state" Nov 5 15:47:46.959679 kubelet[2780]: I1105 15:47:46.959619 2780 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 15:47:46.960407 kubelet[2780]: I1105 15:47:46.960394 2780 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:47:46.960530 kubelet[2780]: I1105 15:47:46.960461 2780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:47:46.960910 kubelet[2780]: I1105 15:47:46.960823 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:47:46.966982 kubelet[2780]: E1105 15:47:46.966956 2780 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:47:46.997462 kubelet[2780]: I1105 15:47:46.997440 2780 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-60-160" Nov 5 15:47:46.997754 kubelet[2780]: I1105 15:47:46.997726 2780 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:46.999361 kubelet[2780]: I1105 15:47:46.999243 2780 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:47.071989 kubelet[2780]: I1105 15:47:47.071963 2780 kubelet_node_status.go:75] "Attempting to register node" node="172-239-60-160" Nov 5 15:47:47.077973 kubelet[2780]: I1105 15:47:47.077936 2780 kubelet_node_status.go:124] "Node was previously registered" node="172-239-60-160" Nov 5 15:47:47.078092 kubelet[2780]: I1105 15:47:47.078001 2780 kubelet_node_status.go:78] "Successfully registered node" node="172-239-60-160" Nov 5 15:47:47.190689 kubelet[2780]: I1105 15:47:47.190457 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d15cf26fd49605570c9da6593e6a5ce-ca-certs\") pod \"kube-apiserver-172-239-60-160\" (UID: 
\"7d15cf26fd49605570c9da6593e6a5ce\") " pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:47.190689 kubelet[2780]: I1105 15:47:47.190505 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d15cf26fd49605570c9da6593e6a5ce-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-60-160\" (UID: \"7d15cf26fd49605570c9da6593e6a5ce\") " pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:47.190689 kubelet[2780]: I1105 15:47:47.190526 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-ca-certs\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:47.190689 kubelet[2780]: I1105 15:47:47.190541 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-flexvolume-dir\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:47.190689 kubelet[2780]: I1105 15:47:47.190556 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-k8s-certs\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:47.190878 kubelet[2780]: I1105 15:47:47.190570 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:47.190878 kubelet[2780]: I1105 15:47:47.190585 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d15cf26fd49605570c9da6593e6a5ce-k8s-certs\") pod \"kube-apiserver-172-239-60-160\" (UID: \"7d15cf26fd49605570c9da6593e6a5ce\") " pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:47.190878 kubelet[2780]: I1105 15:47:47.190599 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f65bdfde111218b6439edb2b14253bb-kubeconfig\") pod \"kube-controller-manager-172-239-60-160\" (UID: \"1f65bdfde111218b6439edb2b14253bb\") " pod="kube-system/kube-controller-manager-172-239-60-160" Nov 5 15:47:47.190878 kubelet[2780]: I1105 15:47:47.190624 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bd896d75e2152c6db83739b057ab1a0-kubeconfig\") pod \"kube-scheduler-172-239-60-160\" (UID: \"9bd896d75e2152c6db83739b057ab1a0\") " pod="kube-system/kube-scheduler-172-239-60-160" Nov 5 15:47:47.303589 kubelet[2780]: E1105 15:47:47.302951 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:47.303589 kubelet[2780]: E1105 15:47:47.303086 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:47.303589 kubelet[2780]: E1105 15:47:47.303111 2780 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:47.861357 kubelet[2780]: I1105 15:47:47.861169 2780 apiserver.go:52] "Watching apiserver" Nov 5 15:47:47.889746 kubelet[2780]: I1105 15:47:47.889702 2780 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:47:47.928252 kubelet[2780]: E1105 15:47:47.925948 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:47.928252 kubelet[2780]: I1105 15:47:47.926111 2780 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:47.928252 kubelet[2780]: E1105 15:47:47.926464 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:47.934097 kubelet[2780]: E1105 15:47:47.934067 2780 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-60-160\" already exists" pod="kube-system/kube-apiserver-172-239-60-160" Nov 5 15:47:47.934324 kubelet[2780]: E1105 15:47:47.934310 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:47.956871 kubelet[2780]: I1105 15:47:47.956822 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-60-160" podStartSLOduration=1.956808463 podStartE2EDuration="1.956808463s" podCreationTimestamp="2025-11-05 15:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-05 15:47:47.956522614 +0000 UTC m=+1.149992611" watchObservedRunningTime="2025-11-05 15:47:47.956808463 +0000 UTC m=+1.150278460" Nov 5 15:47:47.975347 kubelet[2780]: I1105 15:47:47.975290 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-60-160" podStartSLOduration=1.975276925 podStartE2EDuration="1.975276925s" podCreationTimestamp="2025-11-05 15:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:47:47.965426795 +0000 UTC m=+1.158896792" watchObservedRunningTime="2025-11-05 15:47:47.975276925 +0000 UTC m=+1.168746922" Nov 5 15:47:48.849899 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 5 15:47:48.928265 kubelet[2780]: E1105 15:47:48.927807 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:48.928265 kubelet[2780]: E1105 15:47:48.927855 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:52.242283 kubelet[2780]: I1105 15:47:52.242253 2780 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:47:52.243432 kubelet[2780]: I1105 15:47:52.242994 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:47:52.243464 containerd[1603]: time="2025-11-05T15:47:52.242577428Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 15:47:53.007517 kubelet[2780]: I1105 15:47:53.005841 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-60-160" podStartSLOduration=6.005826524 podStartE2EDuration="6.005826524s" podCreationTimestamp="2025-11-05 15:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:47:47.975782744 +0000 UTC m=+1.169252741" watchObservedRunningTime="2025-11-05 15:47:53.005826524 +0000 UTC m=+6.199296521" Nov 5 15:47:53.016627 systemd[1]: Created slice kubepods-besteffort-pod09e5bac4_4365_4826_9eda_cae3b90d9c79.slice - libcontainer container kubepods-besteffort-pod09e5bac4_4365_4826_9eda_cae3b90d9c79.slice. Nov 5 15:47:53.029033 kubelet[2780]: I1105 15:47:53.029003 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09e5bac4-4365-4826-9eda-cae3b90d9c79-kube-proxy\") pod \"kube-proxy-fsd4q\" (UID: \"09e5bac4-4365-4826-9eda-cae3b90d9c79\") " pod="kube-system/kube-proxy-fsd4q" Nov 5 15:47:53.029138 kubelet[2780]: I1105 15:47:53.029038 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09e5bac4-4365-4826-9eda-cae3b90d9c79-lib-modules\") pod \"kube-proxy-fsd4q\" (UID: \"09e5bac4-4365-4826-9eda-cae3b90d9c79\") " pod="kube-system/kube-proxy-fsd4q" Nov 5 15:47:53.029138 kubelet[2780]: I1105 15:47:53.029080 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09e5bac4-4365-4826-9eda-cae3b90d9c79-xtables-lock\") pod \"kube-proxy-fsd4q\" (UID: \"09e5bac4-4365-4826-9eda-cae3b90d9c79\") " pod="kube-system/kube-proxy-fsd4q" Nov 5 15:47:53.029138 kubelet[2780]: I1105 15:47:53.029097 2780 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9bf2\" (UniqueName: \"kubernetes.io/projected/09e5bac4-4365-4826-9eda-cae3b90d9c79-kube-api-access-p9bf2\") pod \"kube-proxy-fsd4q\" (UID: \"09e5bac4-4365-4826-9eda-cae3b90d9c79\") " pod="kube-system/kube-proxy-fsd4q" Nov 5 15:47:53.321867 kubelet[2780]: W1105 15:47:53.321775 2780 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:172-239-60-160" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node '172-239-60-160' and this object Nov 5 15:47:53.322572 kubelet[2780]: E1105 15:47:53.322545 2780 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:172-239-60-160\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-239-60-160' and this object" logger="UnhandledError" Nov 5 15:47:53.322640 kubelet[2780]: I1105 15:47:53.322455 2780 status_manager.go:890] "Failed to get status for pod" podUID="6fd4a451-d759-4a43-8cd1-7b5b102ad39a" pod="tigera-operator/tigera-operator-7dcd859c48-vdh8d" err="pods \"tigera-operator-7dcd859c48-vdh8d\" is forbidden: User \"system:node:172-239-60-160\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-239-60-160' and this object" Nov 5 15:47:53.322695 kubelet[2780]: W1105 15:47:53.322674 2780 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-239-60-160" cannot list resource "configmaps" in API group "" in the namespace 
"tigera-operator": no relationship found between node '172-239-60-160' and this object Nov 5 15:47:53.323611 kubelet[2780]: E1105 15:47:53.323543 2780 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-239-60-160\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-239-60-160' and this object" logger="UnhandledError" Nov 5 15:47:53.325705 kubelet[2780]: E1105 15:47:53.325687 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:53.326478 containerd[1603]: time="2025-11-05T15:47:53.326265084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fsd4q,Uid:09e5bac4-4365-4826-9eda-cae3b90d9c79,Namespace:kube-system,Attempt:0,}" Nov 5 15:47:53.326685 systemd[1]: Created slice kubepods-besteffort-pod6fd4a451_d759_4a43_8cd1_7b5b102ad39a.slice - libcontainer container kubepods-besteffort-pod6fd4a451_d759_4a43_8cd1_7b5b102ad39a.slice. 
Nov 5 15:47:53.330435 kubelet[2780]: I1105 15:47:53.330279 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6fd4a451-d759-4a43-8cd1-7b5b102ad39a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vdh8d\" (UID: \"6fd4a451-d759-4a43-8cd1-7b5b102ad39a\") " pod="tigera-operator/tigera-operator-7dcd859c48-vdh8d" Nov 5 15:47:53.330435 kubelet[2780]: I1105 15:47:53.330309 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5kl7\" (UniqueName: \"kubernetes.io/projected/6fd4a451-d759-4a43-8cd1-7b5b102ad39a-kube-api-access-q5kl7\") pod \"tigera-operator-7dcd859c48-vdh8d\" (UID: \"6fd4a451-d759-4a43-8cd1-7b5b102ad39a\") " pod="tigera-operator/tigera-operator-7dcd859c48-vdh8d" Nov 5 15:47:53.346533 containerd[1603]: time="2025-11-05T15:47:53.346463024Z" level=info msg="connecting to shim ce1d0a79d74a9a48a1cd00bc09e8e3d08a67d67c50cb209b24c0cbdbefedc387" address="unix:///run/containerd/s/7b19877e0e6841f00a7ebf3520d2bf0222aec6402085025a65e44bab191ba981" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:47:53.379645 systemd[1]: Started cri-containerd-ce1d0a79d74a9a48a1cd00bc09e8e3d08a67d67c50cb209b24c0cbdbefedc387.scope - libcontainer container ce1d0a79d74a9a48a1cd00bc09e8e3d08a67d67c50cb209b24c0cbdbefedc387. 
Nov 5 15:47:53.410174 containerd[1603]: time="2025-11-05T15:47:53.410131740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fsd4q,Uid:09e5bac4-4365-4826-9eda-cae3b90d9c79,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce1d0a79d74a9a48a1cd00bc09e8e3d08a67d67c50cb209b24c0cbdbefedc387\"" Nov 5 15:47:53.411005 kubelet[2780]: E1105 15:47:53.410986 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:53.414515 containerd[1603]: time="2025-11-05T15:47:53.414466426Z" level=info msg="CreateContainer within sandbox \"ce1d0a79d74a9a48a1cd00bc09e8e3d08a67d67c50cb209b24c0cbdbefedc387\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:47:53.424757 containerd[1603]: time="2025-11-05T15:47:53.424645255Z" level=info msg="Container 96aa1fb37bf835237aae7c807ff6322562f424a3d865a610f18b48a71fa3f8a1: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:47:53.427555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300896736.mount: Deactivated successfully. 
Nov 5 15:47:53.436510 containerd[1603]: time="2025-11-05T15:47:53.436077314Z" level=info msg="CreateContainer within sandbox \"ce1d0a79d74a9a48a1cd00bc09e8e3d08a67d67c50cb209b24c0cbdbefedc387\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96aa1fb37bf835237aae7c807ff6322562f424a3d865a610f18b48a71fa3f8a1\"" Nov 5 15:47:53.437258 containerd[1603]: time="2025-11-05T15:47:53.437232043Z" level=info msg="StartContainer for \"96aa1fb37bf835237aae7c807ff6322562f424a3d865a610f18b48a71fa3f8a1\"" Nov 5 15:47:53.438730 containerd[1603]: time="2025-11-05T15:47:53.438695471Z" level=info msg="connecting to shim 96aa1fb37bf835237aae7c807ff6322562f424a3d865a610f18b48a71fa3f8a1" address="unix:///run/containerd/s/7b19877e0e6841f00a7ebf3520d2bf0222aec6402085025a65e44bab191ba981" protocol=ttrpc version=3 Nov 5 15:47:53.458614 systemd[1]: Started cri-containerd-96aa1fb37bf835237aae7c807ff6322562f424a3d865a610f18b48a71fa3f8a1.scope - libcontainer container 96aa1fb37bf835237aae7c807ff6322562f424a3d865a610f18b48a71fa3f8a1. 
Nov 5 15:47:53.498003 containerd[1603]: time="2025-11-05T15:47:53.497977592Z" level=info msg="StartContainer for \"96aa1fb37bf835237aae7c807ff6322562f424a3d865a610f18b48a71fa3f8a1\" returns successfully" Nov 5 15:47:53.936685 kubelet[2780]: E1105 15:47:53.935593 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:54.437941 kubelet[2780]: E1105 15:47:54.437904 2780 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 5 15:47:54.437941 kubelet[2780]: E1105 15:47:54.437933 2780 projected.go:194] Error preparing data for projected volume kube-api-access-q5kl7 for pod tigera-operator/tigera-operator-7dcd859c48-vdh8d: failed to sync configmap cache: timed out waiting for the condition Nov 5 15:47:54.438353 kubelet[2780]: E1105 15:47:54.437995 2780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6fd4a451-d759-4a43-8cd1-7b5b102ad39a-kube-api-access-q5kl7 podName:6fd4a451-d759-4a43-8cd1-7b5b102ad39a nodeName:}" failed. No retries permitted until 2025-11-05 15:47:54.937974372 +0000 UTC m=+8.131444369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q5kl7" (UniqueName: "kubernetes.io/projected/6fd4a451-d759-4a43-8cd1-7b5b102ad39a-kube-api-access-q5kl7") pod "tigera-operator-7dcd859c48-vdh8d" (UID: "6fd4a451-d759-4a43-8cd1-7b5b102ad39a") : failed to sync configmap cache: timed out waiting for the condition Nov 5 15:47:54.501073 kubelet[2780]: E1105 15:47:54.500757 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:54.511699 kubelet[2780]: I1105 15:47:54.511622 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fsd4q" podStartSLOduration=2.511608648 podStartE2EDuration="2.511608648s" podCreationTimestamp="2025-11-05 15:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:47:53.952868967 +0000 UTC m=+7.146338964" watchObservedRunningTime="2025-11-05 15:47:54.511608648 +0000 UTC m=+7.705078645" Nov 5 15:47:54.685159 kubelet[2780]: E1105 15:47:54.685126 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:54.937245 kubelet[2780]: E1105 15:47:54.937147 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:54.937804 kubelet[2780]: E1105 15:47:54.937780 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:55.132249 containerd[1603]: time="2025-11-05T15:47:55.132119568Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vdh8d,Uid:6fd4a451-d759-4a43-8cd1-7b5b102ad39a,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:47:55.147897 containerd[1603]: time="2025-11-05T15:47:55.147863632Z" level=info msg="connecting to shim 6256e95f0d1b220f8f57fafb4ebd67cb09c4f282c09a8f6286a23bc105cb5ec0" address="unix:///run/containerd/s/f500e88fdda4d27a789573d815f9c65817fc35594b9176c36152693d56532f2c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:47:55.174787 systemd[1]: Started cri-containerd-6256e95f0d1b220f8f57fafb4ebd67cb09c4f282c09a8f6286a23bc105cb5ec0.scope - libcontainer container 6256e95f0d1b220f8f57fafb4ebd67cb09c4f282c09a8f6286a23bc105cb5ec0. Nov 5 15:47:55.217680 containerd[1603]: time="2025-11-05T15:47:55.217481383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vdh8d,Uid:6fd4a451-d759-4a43-8cd1-7b5b102ad39a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6256e95f0d1b220f8f57fafb4ebd67cb09c4f282c09a8f6286a23bc105cb5ec0\"" Nov 5 15:47:55.219190 containerd[1603]: time="2025-11-05T15:47:55.219164721Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:47:55.934059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638388559.mount: Deactivated successfully. 
Nov 5 15:47:55.941246 kubelet[2780]: E1105 15:47:55.941167 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:57.560461 containerd[1603]: time="2025-11-05T15:47:57.560415690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:47:57.561528 containerd[1603]: time="2025-11-05T15:47:57.561347399Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 15:47:57.562047 containerd[1603]: time="2025-11-05T15:47:57.562017958Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:47:57.563851 containerd[1603]: time="2025-11-05T15:47:57.563820146Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:47:57.564630 containerd[1603]: time="2025-11-05T15:47:57.564599955Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.345409014s" Nov 5 15:47:57.564764 containerd[1603]: time="2025-11-05T15:47:57.564748255Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 15:47:57.567379 containerd[1603]: time="2025-11-05T15:47:57.567338403Z" level=info msg="CreateContainer within sandbox 
\"6256e95f0d1b220f8f57fafb4ebd67cb09c4f282c09a8f6286a23bc105cb5ec0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:47:57.576881 containerd[1603]: time="2025-11-05T15:47:57.576848193Z" level=info msg="Container cf34425034b154d67b6e5279fc58e66790d51d55dfab3c7fbd6b4ec12754a184: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:47:57.582792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3606043624.mount: Deactivated successfully. Nov 5 15:47:57.584431 containerd[1603]: time="2025-11-05T15:47:57.584380816Z" level=info msg="CreateContainer within sandbox \"6256e95f0d1b220f8f57fafb4ebd67cb09c4f282c09a8f6286a23bc105cb5ec0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf34425034b154d67b6e5279fc58e66790d51d55dfab3c7fbd6b4ec12754a184\"" Nov 5 15:47:57.585208 containerd[1603]: time="2025-11-05T15:47:57.585180045Z" level=info msg="StartContainer for \"cf34425034b154d67b6e5279fc58e66790d51d55dfab3c7fbd6b4ec12754a184\"" Nov 5 15:47:57.587190 containerd[1603]: time="2025-11-05T15:47:57.587077103Z" level=info msg="connecting to shim cf34425034b154d67b6e5279fc58e66790d51d55dfab3c7fbd6b4ec12754a184" address="unix:///run/containerd/s/f500e88fdda4d27a789573d815f9c65817fc35594b9176c36152693d56532f2c" protocol=ttrpc version=3 Nov 5 15:47:57.619808 systemd[1]: Started cri-containerd-cf34425034b154d67b6e5279fc58e66790d51d55dfab3c7fbd6b4ec12754a184.scope - libcontainer container cf34425034b154d67b6e5279fc58e66790d51d55dfab3c7fbd6b4ec12754a184. 
Nov 5 15:47:57.655865 containerd[1603]: time="2025-11-05T15:47:57.655820064Z" level=info msg="StartContainer for \"cf34425034b154d67b6e5279fc58e66790d51d55dfab3c7fbd6b4ec12754a184\" returns successfully" Nov 5 15:47:57.837629 kubelet[2780]: E1105 15:47:57.837094 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:47:58.342794 systemd-resolved[1286]: Clock change detected. Flushing caches. Nov 5 15:47:58.343135 systemd-timesyncd[1516]: Contacted time server [2603:c020:0:8369::feeb:dab]:123 (2.flatcar.pool.ntp.org). Nov 5 15:47:58.343194 systemd-timesyncd[1516]: Initial clock synchronization to Wed 2025-11-05 15:47:58.342664 UTC. Nov 5 15:47:58.371627 kubelet[2780]: I1105 15:47:58.371534 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-vdh8d" podStartSLOduration=3.025010454 podStartE2EDuration="5.371508598s" podCreationTimestamp="2025-11-05 15:47:53 +0000 UTC" firstStartedPulling="2025-11-05 15:47:55.218779751 +0000 UTC m=+8.412249748" lastFinishedPulling="2025-11-05 15:47:57.565277895 +0000 UTC m=+10.758747892" observedRunningTime="2025-11-05 15:47:58.371089908 +0000 UTC m=+11.150324830" watchObservedRunningTime="2025-11-05 15:47:58.371508598 +0000 UTC m=+11.150743500" Nov 5 15:48:03.720472 update_engine[1581]: I20251105 15:48:03.720407 1581 update_attempter.cc:509] Updating boot flags... Nov 5 15:48:03.770564 sudo[1855]: pam_unix(sudo:session): session closed for user root Nov 5 15:48:03.831585 sshd[1854]: Connection closed by 139.178.89.65 port 51606 Nov 5 15:48:03.831886 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Nov 5 15:48:03.862078 systemd[1]: sshd@6-172.239.60.160:22-139.178.89.65:51606.service: Deactivated successfully. Nov 5 15:48:03.868214 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 5 15:48:03.869053 systemd[1]: session-7.scope: Consumed 3.555s CPU time, 225.2M memory peak. Nov 5 15:48:03.899574 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:48:03.919894 systemd-logind[1580]: Removed session 7. Nov 5 15:48:08.492838 systemd[1]: Created slice kubepods-besteffort-pode0ce1caa_4a51_4aa5_afc0_156eb40ce16a.slice - libcontainer container kubepods-besteffort-pode0ce1caa_4a51_4aa5_afc0_156eb40ce16a.slice. Nov 5 15:48:08.541574 kubelet[2780]: I1105 15:48:08.541524 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ce1caa-4a51-4aa5-afc0-156eb40ce16a-tigera-ca-bundle\") pod \"calico-typha-745d744857-kc6v8\" (UID: \"e0ce1caa-4a51-4aa5-afc0-156eb40ce16a\") " pod="calico-system/calico-typha-745d744857-kc6v8" Nov 5 15:48:08.541574 kubelet[2780]: I1105 15:48:08.541573 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e0ce1caa-4a51-4aa5-afc0-156eb40ce16a-typha-certs\") pod \"calico-typha-745d744857-kc6v8\" (UID: \"e0ce1caa-4a51-4aa5-afc0-156eb40ce16a\") " pod="calico-system/calico-typha-745d744857-kc6v8" Nov 5 15:48:08.542292 kubelet[2780]: I1105 15:48:08.541594 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qktn\" (UniqueName: \"kubernetes.io/projected/e0ce1caa-4a51-4aa5-afc0-156eb40ce16a-kube-api-access-2qktn\") pod \"calico-typha-745d744857-kc6v8\" (UID: \"e0ce1caa-4a51-4aa5-afc0-156eb40ce16a\") " pod="calico-system/calico-typha-745d744857-kc6v8" Nov 5 15:48:08.687898 systemd[1]: Created slice kubepods-besteffort-poda43adf3c_1c45_4d00_978e_3ce9c3ba5e6c.slice - libcontainer container kubepods-besteffort-poda43adf3c_1c45_4d00_978e_3ce9c3ba5e6c.slice. 
Nov 5 15:48:08.743461 kubelet[2780]: I1105 15:48:08.743018 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-cni-bin-dir\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743461 kubelet[2780]: I1105 15:48:08.743076 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-var-lib-calico\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743461 kubelet[2780]: I1105 15:48:08.743095 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-xtables-lock\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743461 kubelet[2780]: I1105 15:48:08.743110 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsp7v\" (UniqueName: \"kubernetes.io/projected/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-kube-api-access-wsp7v\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743461 kubelet[2780]: I1105 15:48:08.743148 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-node-certs\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743657 kubelet[2780]: I1105 15:48:08.743163 2780 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-cni-log-dir\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743657 kubelet[2780]: I1105 15:48:08.743176 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-tigera-ca-bundle\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743657 kubelet[2780]: I1105 15:48:08.743189 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-policysync\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743657 kubelet[2780]: I1105 15:48:08.743239 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-cni-net-dir\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.743657 kubelet[2780]: I1105 15:48:08.743263 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-lib-modules\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.744805 kubelet[2780]: I1105 15:48:08.743310 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-flexvol-driver-host\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.744805 kubelet[2780]: I1105 15:48:08.743323 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c-var-run-calico\") pod \"calico-node-r22v8\" (UID: \"a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c\") " pod="calico-system/calico-node-r22v8" Nov 5 15:48:08.803094 kubelet[2780]: E1105 15:48:08.802850 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:08.804139 containerd[1603]: time="2025-11-05T15:48:08.803445356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-745d744857-kc6v8,Uid:e0ce1caa-4a51-4aa5-afc0-156eb40ce16a,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:08.827198 containerd[1603]: time="2025-11-05T15:48:08.826837622Z" level=info msg="connecting to shim 54779b638be13165c1218c387f32bf0720c3ba0823d62181106df456ca6a6e77" address="unix:///run/containerd/s/13b150baa813842b404e8304803a586a484de87e3c7c16b7caf119c74c1e4f19" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:08.852872 kubelet[2780]: E1105 15:48:08.852840 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.852872 kubelet[2780]: W1105 15:48:08.852867 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.852973 kubelet[2780]: E1105 15:48:08.852909 2780 plugins.go:695] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.854838 kubelet[2780]: E1105 15:48:08.854806 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.854889 kubelet[2780]: W1105 15:48:08.854846 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.854889 kubelet[2780]: E1105 15:48:08.854868 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.855178 kubelet[2780]: E1105 15:48:08.855143 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.855178 kubelet[2780]: W1105 15:48:08.855161 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.855436 kubelet[2780]: E1105 15:48:08.855414 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:08.855546 kubelet[2780]: E1105 15:48:08.855524 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.855546 kubelet[2780]: W1105 15:48:08.855534 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.855796 kubelet[2780]: E1105 15:48:08.855773 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.855913 kubelet[2780]: E1105 15:48:08.855882 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.855913 kubelet[2780]: W1105 15:48:08.855891 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.856885 kubelet[2780]: E1105 15:48:08.856771 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:08.856885 kubelet[2780]: E1105 15:48:08.856863 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.856885 kubelet[2780]: W1105 15:48:08.856870 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.857317 kubelet[2780]: E1105 15:48:08.857305 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.857574 kubelet[2780]: E1105 15:48:08.857469 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.857574 kubelet[2780]: W1105 15:48:08.857549 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.857872 kubelet[2780]: E1105 15:48:08.857809 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:08.858434 kubelet[2780]: E1105 15:48:08.858352 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.858434 kubelet[2780]: W1105 15:48:08.858362 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.860221 kubelet[2780]: E1105 15:48:08.859999 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.860221 kubelet[2780]: W1105 15:48:08.860010 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.861323 kubelet[2780]: E1105 15:48:08.861240 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.861323 kubelet[2780]: W1105 15:48:08.861251 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.861942 kubelet[2780]: E1105 15:48:08.861930 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.862636 kubelet[2780]: W1105 15:48:08.862621 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.863753 kubelet[2780]: E1105 15:48:08.863571 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.864064 kubelet[2780]: E1105 15:48:08.861980 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.864064 kubelet[2780]: E1105 15:48:08.861975 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.864064 kubelet[2780]: E1105 15:48:08.861986 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.864422 kubelet[2780]: E1105 15:48:08.864396 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.864422 kubelet[2780]: W1105 15:48:08.864417 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.864486 kubelet[2780]: E1105 15:48:08.864471 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:08.865195 kubelet[2780]: E1105 15:48:08.864979 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.865195 kubelet[2780]: W1105 15:48:08.864994 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.865195 kubelet[2780]: E1105 15:48:08.865008 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.865417 kubelet[2780]: E1105 15:48:08.865383 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.865451 kubelet[2780]: W1105 15:48:08.865432 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.865480 kubelet[2780]: E1105 15:48:08.865457 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:08.866020 kubelet[2780]: E1105 15:48:08.865998 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.866081 kubelet[2780]: W1105 15:48:08.866038 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.866081 kubelet[2780]: E1105 15:48:08.866049 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.867278 kubelet[2780]: E1105 15:48:08.866948 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.867278 kubelet[2780]: W1105 15:48:08.867030 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.867278 kubelet[2780]: E1105 15:48:08.867141 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:08.871178 systemd[1]: Started cri-containerd-54779b638be13165c1218c387f32bf0720c3ba0823d62181106df456ca6a6e77.scope - libcontainer container 54779b638be13165c1218c387f32bf0720c3ba0823d62181106df456ca6a6e77. 
Nov 5 15:48:08.909000 kubelet[2780]: E1105 15:48:08.908961 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:08.945000 kubelet[2780]: E1105 15:48:08.944983 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.945000 kubelet[2780]: W1105 15:48:08.944995 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.945000 kubelet[2780]: E1105 15:48:08.945005 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:08.945226 kubelet[2780]: I1105 15:48:08.945051 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/175c15d8-2ca8-4a9b-b355-438a1e3fa9fd-varrun\") pod \"csi-node-driver-6slgh\" (UID: \"175c15d8-2ca8-4a9b-b355-438a1e3fa9fd\") " pod="calico-system/csi-node-driver-6slgh" Nov 5 15:48:08.945408 kubelet[2780]: I1105 15:48:08.945378 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/175c15d8-2ca8-4a9b-b355-438a1e3fa9fd-kubelet-dir\") pod \"csi-node-driver-6slgh\" (UID: \"175c15d8-2ca8-4a9b-b355-438a1e3fa9fd\") " pod="calico-system/csi-node-driver-6slgh" Nov 5 15:48:08.945788 kubelet[2780]: I1105 15:48:08.945619 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/175c15d8-2ca8-4a9b-b355-438a1e3fa9fd-registration-dir\") pod \"csi-node-driver-6slgh\" (UID: \"175c15d8-2ca8-4a9b-b355-438a1e3fa9fd\") " pod="calico-system/csi-node-driver-6slgh" Nov 5 15:48:08.946005 kubelet[2780]: I1105 15:48:08.945973 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87mz\" (UniqueName: \"kubernetes.io/projected/175c15d8-2ca8-4a9b-b355-438a1e3fa9fd-kube-api-access-k87mz\") pod \"csi-node-driver-6slgh\" (UID: \"175c15d8-2ca8-4a9b-b355-438a1e3fa9fd\") " pod="calico-system/csi-node-driver-6slgh" Nov 5 15:48:08.946333 kubelet[2780]: I1105 15:48:08.946260 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/175c15d8-2ca8-4a9b-b355-438a1e3fa9fd-socket-dir\") pod \"csi-node-driver-6slgh\" (UID: \"175c15d8-2ca8-4a9b-b355-438a1e3fa9fd\") " pod="calico-system/csi-node-driver-6slgh" Nov 5 15:48:08.951619 kubelet[2780]: E1105 15:48:08.951581 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:08.951619 kubelet[2780]: W1105 15:48:08.951592 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:08.951619 kubelet[2780]: E1105 15:48:08.951601 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:08.959645 containerd[1603]: time="2025-11-05T15:48:08.959612150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-745d744857-kc6v8,Uid:e0ce1caa-4a51-4aa5-afc0-156eb40ce16a,Namespace:calico-system,Attempt:0,} returns sandbox id \"54779b638be13165c1218c387f32bf0720c3ba0823d62181106df456ca6a6e77\"" Nov 5 15:48:08.960630 kubelet[2780]: E1105 15:48:08.960426 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:08.961707 containerd[1603]: time="2025-11-05T15:48:08.961684748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:48:08.992433 kubelet[2780]: E1105 15:48:08.992399 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:08.992891 containerd[1603]: time="2025-11-05T15:48:08.992820786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r22v8,Uid:a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:09.007276 containerd[1603]: time="2025-11-05T15:48:09.007087982Z" level=info msg="connecting to shim 4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb" address="unix:///run/containerd/s/0f2557fb60739477c762ab1887ef935624f080e176ea39ac440b6e7f87455bce" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:09.034893 systemd[1]: Started cri-containerd-4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb.scope - libcontainer container 4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb. 
Nov 5 15:48:09.047018 kubelet[2780]: E1105 15:48:09.046980 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.047018 kubelet[2780]: W1105 15:48:09.047003 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.047253 kubelet[2780]: E1105 15:48:09.047041 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.052612 kubelet[2780]: E1105 15:48:09.052582 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.052612 kubelet[2780]: W1105 15:48:09.052593 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.052612 kubelet[2780]: E1105 15:48:09.052602 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.053487 kubelet[2780]: E1105 15:48:09.052688 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.053487 kubelet[2780]: E1105 15:48:09.053182 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.053487 kubelet[2780]: W1105 15:48:09.053191 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.053487 kubelet[2780]: E1105 15:48:09.053240 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.053596 kubelet[2780]: E1105 15:48:09.053581 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.053596 kubelet[2780]: W1105 15:48:09.053589 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.054023 kubelet[2780]: E1105 15:48:09.054001 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.054023 kubelet[2780]: W1105 15:48:09.054016 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.054023 kubelet[2780]: E1105 15:48:09.054026 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.054378 kubelet[2780]: E1105 15:48:09.054256 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.054378 kubelet[2780]: W1105 15:48:09.054268 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.054378 kubelet[2780]: E1105 15:48:09.054276 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.054635 kubelet[2780]: E1105 15:48:09.054482 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.054635 kubelet[2780]: W1105 15:48:09.054495 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.054635 kubelet[2780]: E1105 15:48:09.054503 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.054885 kubelet[2780]: E1105 15:48:09.054710 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.054885 kubelet[2780]: W1105 15:48:09.054758 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.054885 kubelet[2780]: E1105 15:48:09.054767 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.055144 kubelet[2780]: E1105 15:48:09.055017 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.055144 kubelet[2780]: W1105 15:48:09.055029 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.055144 kubelet[2780]: E1105 15:48:09.055039 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.055223 kubelet[2780]: E1105 15:48:09.055197 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.055223 kubelet[2780]: W1105 15:48:09.055204 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.055223 kubelet[2780]: E1105 15:48:09.055212 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.055553 kubelet[2780]: E1105 15:48:09.055364 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.055553 kubelet[2780]: W1105 15:48:09.055377 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.055553 kubelet[2780]: E1105 15:48:09.055384 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.055553 kubelet[2780]: E1105 15:48:09.053616 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.055655 kubelet[2780]: E1105 15:48:09.055589 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.055655 kubelet[2780]: W1105 15:48:09.055596 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.055655 kubelet[2780]: E1105 15:48:09.055604 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.056151 kubelet[2780]: E1105 15:48:09.055854 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.056151 kubelet[2780]: W1105 15:48:09.055862 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.056151 kubelet[2780]: E1105 15:48:09.055886 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.056151 kubelet[2780]: E1105 15:48:09.056119 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.056151 kubelet[2780]: W1105 15:48:09.056128 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.056287 kubelet[2780]: E1105 15:48:09.056183 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.056534 kubelet[2780]: E1105 15:48:09.056365 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.056534 kubelet[2780]: W1105 15:48:09.056401 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.056534 kubelet[2780]: E1105 15:48:09.056409 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.056797 kubelet[2780]: E1105 15:48:09.056617 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.056797 kubelet[2780]: W1105 15:48:09.056624 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.056797 kubelet[2780]: E1105 15:48:09.056644 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.057083 kubelet[2780]: E1105 15:48:09.056873 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.057083 kubelet[2780]: W1105 15:48:09.056884 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.057083 kubelet[2780]: E1105 15:48:09.056905 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.057169 kubelet[2780]: E1105 15:48:09.057152 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.057169 kubelet[2780]: W1105 15:48:09.057160 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.057210 kubelet[2780]: E1105 15:48:09.057197 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.058745 kubelet[2780]: E1105 15:48:09.057854 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.058745 kubelet[2780]: W1105 15:48:09.057866 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.058745 kubelet[2780]: E1105 15:48:09.057943 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.058745 kubelet[2780]: E1105 15:48:09.058257 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.058745 kubelet[2780]: W1105 15:48:09.058264 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.058745 kubelet[2780]: E1105 15:48:09.058273 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.058745 kubelet[2780]: E1105 15:48:09.058532 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.058745 kubelet[2780]: W1105 15:48:09.058539 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.058745 kubelet[2780]: E1105 15:48:09.058548 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:09.091981 kubelet[2780]: E1105 15:48:09.091942 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:09.091981 kubelet[2780]: W1105 15:48:09.091968 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:09.092121 kubelet[2780]: E1105 15:48:09.092005 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:09.096987 containerd[1603]: time="2025-11-05T15:48:09.096863862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r22v8,Uid:a43adf3c-1c45-4d00-978e-3ce9c3ba5e6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb\"" Nov 5 15:48:09.097709 kubelet[2780]: E1105 15:48:09.097688 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:10.168804 containerd[1603]: time="2025-11-05T15:48:10.168759140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:10.169898 containerd[1603]: time="2025-11-05T15:48:10.169621300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:48:10.170412 containerd[1603]: time="2025-11-05T15:48:10.170375969Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:10.172224 containerd[1603]: 
time="2025-11-05T15:48:10.172184827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:10.172798 containerd[1603]: time="2025-11-05T15:48:10.172759106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.210900109s" Nov 5 15:48:10.172887 containerd[1603]: time="2025-11-05T15:48:10.172870886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:48:10.175041 containerd[1603]: time="2025-11-05T15:48:10.174622845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:48:10.193146 containerd[1603]: time="2025-11-05T15:48:10.191910517Z" level=info msg="CreateContainer within sandbox \"54779b638be13165c1218c387f32bf0720c3ba0823d62181106df456ca6a6e77\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:48:10.204502 containerd[1603]: time="2025-11-05T15:48:10.201978707Z" level=info msg="Container 4c2994f8b72028746a380ca22a8a884f86533b70c98f80bcac1e36e400e45923: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:10.205379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188645888.mount: Deactivated successfully. 
Nov 5 15:48:10.210578 containerd[1603]: time="2025-11-05T15:48:10.210540089Z" level=info msg="CreateContainer within sandbox \"54779b638be13165c1218c387f32bf0720c3ba0823d62181106df456ca6a6e77\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c2994f8b72028746a380ca22a8a884f86533b70c98f80bcac1e36e400e45923\"" Nov 5 15:48:10.211690 containerd[1603]: time="2025-11-05T15:48:10.211601158Z" level=info msg="StartContainer for \"4c2994f8b72028746a380ca22a8a884f86533b70c98f80bcac1e36e400e45923\"" Nov 5 15:48:10.212901 containerd[1603]: time="2025-11-05T15:48:10.212880146Z" level=info msg="connecting to shim 4c2994f8b72028746a380ca22a8a884f86533b70c98f80bcac1e36e400e45923" address="unix:///run/containerd/s/13b150baa813842b404e8304803a586a484de87e3c7c16b7caf119c74c1e4f19" protocol=ttrpc version=3 Nov 5 15:48:10.238186 systemd[1]: Started cri-containerd-4c2994f8b72028746a380ca22a8a884f86533b70c98f80bcac1e36e400e45923.scope - libcontainer container 4c2994f8b72028746a380ca22a8a884f86533b70c98f80bcac1e36e400e45923. 
Nov 5 15:48:10.304593 containerd[1603]: time="2025-11-05T15:48:10.304561605Z" level=info msg="StartContainer for \"4c2994f8b72028746a380ca22a8a884f86533b70c98f80bcac1e36e400e45923\" returns successfully" Nov 5 15:48:10.311692 kubelet[2780]: E1105 15:48:10.311651 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:10.391760 kubelet[2780]: E1105 15:48:10.391683 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:10.455314 kubelet[2780]: E1105 15:48:10.455194 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.455314 kubelet[2780]: W1105 15:48:10.455218 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.455314 kubelet[2780]: E1105 15:48:10.455238 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.455507 kubelet[2780]: E1105 15:48:10.455466 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.455507 kubelet[2780]: W1105 15:48:10.455475 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.455507 kubelet[2780]: E1105 15:48:10.455484 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.455692 kubelet[2780]: E1105 15:48:10.455659 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.455692 kubelet[2780]: W1105 15:48:10.455674 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.455692 kubelet[2780]: E1105 15:48:10.455682 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.456624 kubelet[2780]: E1105 15:48:10.456598 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.456624 kubelet[2780]: W1105 15:48:10.456615 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.456624 kubelet[2780]: E1105 15:48:10.456624 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.456967 kubelet[2780]: E1105 15:48:10.456839 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.456967 kubelet[2780]: W1105 15:48:10.456847 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.456967 kubelet[2780]: E1105 15:48:10.456855 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.457035 kubelet[2780]: E1105 15:48:10.457019 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.457035 kubelet[2780]: W1105 15:48:10.457026 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.457035 kubelet[2780]: E1105 15:48:10.457034 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.457508 kubelet[2780]: E1105 15:48:10.457489 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.457508 kubelet[2780]: W1105 15:48:10.457503 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.457579 kubelet[2780]: E1105 15:48:10.457511 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.457744 kubelet[2780]: E1105 15:48:10.457702 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.457744 kubelet[2780]: W1105 15:48:10.457716 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.457810 kubelet[2780]: E1105 15:48:10.457740 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.458861 kubelet[2780]: E1105 15:48:10.458836 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.458861 kubelet[2780]: W1105 15:48:10.458855 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.458861 kubelet[2780]: E1105 15:48:10.458865 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.459765 kubelet[2780]: E1105 15:48:10.459230 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.459765 kubelet[2780]: W1105 15:48:10.459245 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.459765 kubelet[2780]: E1105 15:48:10.459253 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.459765 kubelet[2780]: E1105 15:48:10.459418 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.459765 kubelet[2780]: W1105 15:48:10.459426 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.459765 kubelet[2780]: E1105 15:48:10.459433 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.459765 kubelet[2780]: E1105 15:48:10.459597 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.459765 kubelet[2780]: W1105 15:48:10.459604 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.459765 kubelet[2780]: E1105 15:48:10.459611 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.460626 kubelet[2780]: E1105 15:48:10.459825 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.460626 kubelet[2780]: W1105 15:48:10.459833 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.460626 kubelet[2780]: E1105 15:48:10.459841 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.460626 kubelet[2780]: E1105 15:48:10.460200 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.460626 kubelet[2780]: W1105 15:48:10.460207 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.460626 kubelet[2780]: E1105 15:48:10.460214 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.460626 kubelet[2780]: E1105 15:48:10.460369 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.460626 kubelet[2780]: W1105 15:48:10.460376 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.460626 kubelet[2780]: E1105 15:48:10.460383 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.468674 kubelet[2780]: E1105 15:48:10.468653 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.468674 kubelet[2780]: W1105 15:48:10.468669 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.468783 kubelet[2780]: E1105 15:48:10.468681 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.468948 kubelet[2780]: E1105 15:48:10.468921 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.468997 kubelet[2780]: W1105 15:48:10.468978 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.469022 kubelet[2780]: E1105 15:48:10.469000 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.469353 kubelet[2780]: E1105 15:48:10.469337 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.469353 kubelet[2780]: W1105 15:48:10.469350 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.469427 kubelet[2780]: E1105 15:48:10.469373 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.469699 kubelet[2780]: E1105 15:48:10.469677 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.469699 kubelet[2780]: W1105 15:48:10.469693 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.469777 kubelet[2780]: E1105 15:48:10.469710 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.469977 kubelet[2780]: E1105 15:48:10.469899 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.469977 kubelet[2780]: W1105 15:48:10.469911 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.469977 kubelet[2780]: E1105 15:48:10.469919 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.470114 kubelet[2780]: E1105 15:48:10.470094 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.470165 kubelet[2780]: W1105 15:48:10.470129 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.470165 kubelet[2780]: E1105 15:48:10.470150 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.470373 kubelet[2780]: E1105 15:48:10.470341 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.470405 kubelet[2780]: W1105 15:48:10.470375 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.470405 kubelet[2780]: E1105 15:48:10.470396 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.470623 kubelet[2780]: E1105 15:48:10.470604 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.470623 kubelet[2780]: W1105 15:48:10.470615 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.470688 kubelet[2780]: E1105 15:48:10.470636 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.471043 kubelet[2780]: E1105 15:48:10.470986 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.471043 kubelet[2780]: W1105 15:48:10.470998 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.471043 kubelet[2780]: E1105 15:48:10.471010 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.471227 kubelet[2780]: E1105 15:48:10.471214 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.471227 kubelet[2780]: W1105 15:48:10.471222 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.471445 kubelet[2780]: E1105 15:48:10.471299 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.471609 kubelet[2780]: E1105 15:48:10.471590 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.471609 kubelet[2780]: W1105 15:48:10.471606 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.471783 kubelet[2780]: E1105 15:48:10.471766 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.472132 kubelet[2780]: E1105 15:48:10.472071 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.472132 kubelet[2780]: W1105 15:48:10.472085 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.472132 kubelet[2780]: E1105 15:48:10.472105 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.472375 kubelet[2780]: E1105 15:48:10.472317 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.472375 kubelet[2780]: W1105 15:48:10.472329 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.472375 kubelet[2780]: E1105 15:48:10.472349 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.472622 kubelet[2780]: E1105 15:48:10.472600 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.472622 kubelet[2780]: W1105 15:48:10.472616 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.472622 kubelet[2780]: E1105 15:48:10.472624 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.473184 kubelet[2780]: E1105 15:48:10.472971 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.473184 kubelet[2780]: W1105 15:48:10.472987 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.473246 kubelet[2780]: E1105 15:48:10.473197 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.473449 kubelet[2780]: E1105 15:48:10.473428 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.473449 kubelet[2780]: W1105 15:48:10.473443 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.473803 kubelet[2780]: E1105 15:48:10.473777 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.474773 kubelet[2780]: E1105 15:48:10.474379 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.474773 kubelet[2780]: W1105 15:48:10.474393 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.474773 kubelet[2780]: E1105 15:48:10.474401 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:48:10.474773 kubelet[2780]: E1105 15:48:10.474564 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:48:10.474773 kubelet[2780]: W1105 15:48:10.474572 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:48:10.474773 kubelet[2780]: E1105 15:48:10.474579 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:48:10.977879 containerd[1603]: time="2025-11-05T15:48:10.977823881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:10.979190 containerd[1603]: time="2025-11-05T15:48:10.979166940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:48:10.979613 containerd[1603]: time="2025-11-05T15:48:10.979594560Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:10.982786 containerd[1603]: time="2025-11-05T15:48:10.982762296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:10.983651 containerd[1603]: time="2025-11-05T15:48:10.983435996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 806.926723ms" Nov 5 15:48:10.983651 containerd[1603]: time="2025-11-05T15:48:10.983463746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:48:10.987572 containerd[1603]: time="2025-11-05T15:48:10.987551592Z" level=info msg="CreateContainer within sandbox \"4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:48:11.004278 containerd[1603]: time="2025-11-05T15:48:11.003250666Z" level=info msg="Container cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:11.003768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338475523.mount: Deactivated successfully. Nov 5 15:48:11.012529 containerd[1603]: time="2025-11-05T15:48:11.012412867Z" level=info msg="CreateContainer within sandbox \"4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a\"" Nov 5 15:48:11.013286 containerd[1603]: time="2025-11-05T15:48:11.013261026Z" level=info msg="StartContainer for \"cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a\"" Nov 5 15:48:11.015454 containerd[1603]: time="2025-11-05T15:48:11.015419564Z" level=info msg="connecting to shim cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a" address="unix:///run/containerd/s/0f2557fb60739477c762ab1887ef935624f080e176ea39ac440b6e7f87455bce" protocol=ttrpc version=3 Nov 5 15:48:11.042855 systemd[1]: Started cri-containerd-cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a.scope - libcontainer container cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a. Nov 5 15:48:11.091373 containerd[1603]: time="2025-11-05T15:48:11.091303498Z" level=info msg="StartContainer for \"cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a\" returns successfully" Nov 5 15:48:11.107918 systemd[1]: cri-containerd-cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a.scope: Deactivated successfully. 
Nov 5 15:48:11.110187 containerd[1603]: time="2025-11-05T15:48:11.110161129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a\" id:\"cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a\" pid:3482 exited_at:{seconds:1762357691 nanos:109816079}" Nov 5 15:48:11.110351 containerd[1603]: time="2025-11-05T15:48:11.110242109Z" level=info msg="received exit event container_id:\"cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a\" id:\"cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a\" pid:3482 exited_at:{seconds:1762357691 nanos:109816079}" Nov 5 15:48:11.133291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdd8940b0668374155ee78d2b28bd34caf8ae6cf90b94799c3bd595c7e6a6f0a-rootfs.mount: Deactivated successfully. Nov 5 15:48:11.395761 kubelet[2780]: I1105 15:48:11.394906 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:48:11.395761 kubelet[2780]: E1105 15:48:11.395298 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:11.397959 kubelet[2780]: E1105 15:48:11.397819 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:11.399028 containerd[1603]: time="2025-11-05T15:48:11.398995120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:48:11.416059 kubelet[2780]: I1105 15:48:11.414911 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-745d744857-kc6v8" podStartSLOduration=2.202449487 podStartE2EDuration="3.414873434s" podCreationTimestamp="2025-11-05 15:48:08 +0000 UTC" firstStartedPulling="2025-11-05 15:48:08.961382548 
+0000 UTC m=+21.740617430" lastFinishedPulling="2025-11-05 15:48:10.173806495 +0000 UTC m=+22.953041377" observedRunningTime="2025-11-05 15:48:10.416626753 +0000 UTC m=+23.195861635" watchObservedRunningTime="2025-11-05 15:48:11.414873434 +0000 UTC m=+24.194108316" Nov 5 15:48:12.312069 kubelet[2780]: E1105 15:48:12.312005 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:13.380653 containerd[1603]: time="2025-11-05T15:48:13.380503199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:13.383577 containerd[1603]: time="2025-11-05T15:48:13.381757017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 15:48:13.384612 containerd[1603]: time="2025-11-05T15:48:13.383716535Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:13.386767 containerd[1603]: time="2025-11-05T15:48:13.386710982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:13.387184 containerd[1603]: time="2025-11-05T15:48:13.387147722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size 
\"71941459\" in 1.988116812s" Nov 5 15:48:13.387184 containerd[1603]: time="2025-11-05T15:48:13.387181102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:48:13.394398 containerd[1603]: time="2025-11-05T15:48:13.394354675Z" level=info msg="CreateContainer within sandbox \"4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:48:13.402276 containerd[1603]: time="2025-11-05T15:48:13.401983637Z" level=info msg="Container ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:13.408122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914021088.mount: Deactivated successfully. Nov 5 15:48:13.418998 containerd[1603]: time="2025-11-05T15:48:13.418965890Z" level=info msg="CreateContainer within sandbox \"4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf\"" Nov 5 15:48:13.421290 containerd[1603]: time="2025-11-05T15:48:13.419636490Z" level=info msg="StartContainer for \"ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf\"" Nov 5 15:48:13.421290 containerd[1603]: time="2025-11-05T15:48:13.421204768Z" level=info msg="connecting to shim ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf" address="unix:///run/containerd/s/0f2557fb60739477c762ab1887ef935624f080e176ea39ac440b6e7f87455bce" protocol=ttrpc version=3 Nov 5 15:48:13.455855 systemd[1]: Started cri-containerd-ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf.scope - libcontainer container ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf. 
Nov 5 15:48:13.513177 containerd[1603]: time="2025-11-05T15:48:13.513120556Z" level=info msg="StartContainer for \"ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf\" returns successfully" Nov 5 15:48:14.052674 containerd[1603]: time="2025-11-05T15:48:14.052632537Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:48:14.055939 systemd[1]: cri-containerd-ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf.scope: Deactivated successfully. Nov 5 15:48:14.056266 systemd[1]: cri-containerd-ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf.scope: Consumed 576ms CPU time, 197.9M memory peak, 171.3M written to disk. Nov 5 15:48:14.058965 containerd[1603]: time="2025-11-05T15:48:14.057597672Z" level=info msg="received exit event container_id:\"ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf\" id:\"ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf\" pid:3539 exited_at:{seconds:1762357694 nanos:57329292}" Nov 5 15:48:14.058965 containerd[1603]: time="2025-11-05T15:48:14.057911281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf\" id:\"ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf\" pid:3539 exited_at:{seconds:1762357694 nanos:57329292}" Nov 5 15:48:14.087951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad3a21f3341aa70d23e840893340710547f2549b93e5e6a1e9cb66b4c5791fdf-rootfs.mount: Deactivated successfully. 
Nov 5 15:48:14.121900 kubelet[2780]: I1105 15:48:14.121872 2780 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:48:14.165941 systemd[1]: Created slice kubepods-burstable-pod765f74a1_9298_4f11_a1c1_9dfc8dc0f7ac.slice - libcontainer container kubepods-burstable-pod765f74a1_9298_4f11_a1c1_9dfc8dc0f7ac.slice. Nov 5 15:48:14.180113 systemd[1]: Created slice kubepods-burstable-pod121b1dfc_e268_4e0e_8768_d86d30928206.slice - libcontainer container kubepods-burstable-pod121b1dfc_e268_4e0e_8768_d86d30928206.slice. Nov 5 15:48:14.193534 systemd[1]: Created slice kubepods-besteffort-pod04475037_4abe_43a1_ba27_907888160a07.slice - libcontainer container kubepods-besteffort-pod04475037_4abe_43a1_ba27_907888160a07.slice. Nov 5 15:48:14.198176 kubelet[2780]: I1105 15:48:14.198140 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b4571ca-2142-4ad0-85e2-e8b00b2fb524-goldmane-ca-bundle\") pod \"goldmane-666569f655-qtwjt\" (UID: \"3b4571ca-2142-4ad0-85e2-e8b00b2fb524\") " pod="calico-system/goldmane-666569f655-qtwjt" Nov 5 15:48:14.198279 kubelet[2780]: I1105 15:48:14.198179 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3b4571ca-2142-4ad0-85e2-e8b00b2fb524-goldmane-key-pair\") pod \"goldmane-666569f655-qtwjt\" (UID: \"3b4571ca-2142-4ad0-85e2-e8b00b2fb524\") " pod="calico-system/goldmane-666569f655-qtwjt" Nov 5 15:48:14.198279 kubelet[2780]: I1105 15:48:14.198199 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktptk\" (UniqueName: \"kubernetes.io/projected/3b4571ca-2142-4ad0-85e2-e8b00b2fb524-kube-api-access-ktptk\") pod \"goldmane-666569f655-qtwjt\" (UID: \"3b4571ca-2142-4ad0-85e2-e8b00b2fb524\") " pod="calico-system/goldmane-666569f655-qtwjt" Nov 5 
15:48:14.198279 kubelet[2780]: I1105 15:48:14.198214 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04475037-4abe-43a1-ba27-907888160a07-whisker-ca-bundle\") pod \"whisker-6cbc8c6f7d-qrz8n\" (UID: \"04475037-4abe-43a1-ba27-907888160a07\") " pod="calico-system/whisker-6cbc8c6f7d-qrz8n" Nov 5 15:48:14.198279 kubelet[2780]: I1105 15:48:14.198230 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cssb\" (UniqueName: \"kubernetes.io/projected/e61d1196-bf4a-4bdd-877c-9ea9a871d23c-kube-api-access-2cssb\") pod \"calico-apiserver-6644f9d4c6-74nvz\" (UID: \"e61d1196-bf4a-4bdd-877c-9ea9a871d23c\") " pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" Nov 5 15:48:14.198279 kubelet[2780]: I1105 15:48:14.198248 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvlm\" (UniqueName: \"kubernetes.io/projected/d8afeb1c-714d-4335-9a1d-a1135daaa2b3-kube-api-access-nrvlm\") pod \"calico-apiserver-6644f9d4c6-px2b8\" (UID: \"d8afeb1c-714d-4335-9a1d-a1135daaa2b3\") " pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" Nov 5 15:48:14.198399 kubelet[2780]: I1105 15:48:14.198266 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e61d1196-bf4a-4bdd-877c-9ea9a871d23c-calico-apiserver-certs\") pod \"calico-apiserver-6644f9d4c6-74nvz\" (UID: \"e61d1196-bf4a-4bdd-877c-9ea9a871d23c\") " pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" Nov 5 15:48:14.198399 kubelet[2780]: I1105 15:48:14.198280 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04475037-4abe-43a1-ba27-907888160a07-whisker-backend-key-pair\") pod 
\"whisker-6cbc8c6f7d-qrz8n\" (UID: \"04475037-4abe-43a1-ba27-907888160a07\") " pod="calico-system/whisker-6cbc8c6f7d-qrz8n" Nov 5 15:48:14.198399 kubelet[2780]: I1105 15:48:14.198303 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hf5\" (UniqueName: \"kubernetes.io/projected/04475037-4abe-43a1-ba27-907888160a07-kube-api-access-k9hf5\") pod \"whisker-6cbc8c6f7d-qrz8n\" (UID: \"04475037-4abe-43a1-ba27-907888160a07\") " pod="calico-system/whisker-6cbc8c6f7d-qrz8n" Nov 5 15:48:14.198399 kubelet[2780]: I1105 15:48:14.198317 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8afeb1c-714d-4335-9a1d-a1135daaa2b3-calico-apiserver-certs\") pod \"calico-apiserver-6644f9d4c6-px2b8\" (UID: \"d8afeb1c-714d-4335-9a1d-a1135daaa2b3\") " pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" Nov 5 15:48:14.198399 kubelet[2780]: I1105 15:48:14.198332 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df9w9\" (UniqueName: \"kubernetes.io/projected/121b1dfc-e268-4e0e-8768-d86d30928206-kube-api-access-df9w9\") pod \"coredns-668d6bf9bc-zgrpw\" (UID: \"121b1dfc-e268-4e0e-8768-d86d30928206\") " pod="kube-system/coredns-668d6bf9bc-zgrpw" Nov 5 15:48:14.198507 kubelet[2780]: I1105 15:48:14.198347 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac-config-volume\") pod \"coredns-668d6bf9bc-77dvp\" (UID: \"765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac\") " pod="kube-system/coredns-668d6bf9bc-77dvp" Nov 5 15:48:14.198507 kubelet[2780]: I1105 15:48:14.198363 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3b4571ca-2142-4ad0-85e2-e8b00b2fb524-config\") pod \"goldmane-666569f655-qtwjt\" (UID: \"3b4571ca-2142-4ad0-85e2-e8b00b2fb524\") " pod="calico-system/goldmane-666569f655-qtwjt" Nov 5 15:48:14.198507 kubelet[2780]: I1105 15:48:14.198379 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e90854ce-cf6e-4a39-9e2a-1f06e654f065-tigera-ca-bundle\") pod \"calico-kube-controllers-5fc44484bc-vfb78\" (UID: \"e90854ce-cf6e-4a39-9e2a-1f06e654f065\") " pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" Nov 5 15:48:14.198507 kubelet[2780]: I1105 15:48:14.198397 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xbbv\" (UniqueName: \"kubernetes.io/projected/765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac-kube-api-access-8xbbv\") pod \"coredns-668d6bf9bc-77dvp\" (UID: \"765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac\") " pod="kube-system/coredns-668d6bf9bc-77dvp" Nov 5 15:48:14.198507 kubelet[2780]: I1105 15:48:14.198413 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtp75\" (UniqueName: \"kubernetes.io/projected/e90854ce-cf6e-4a39-9e2a-1f06e654f065-kube-api-access-jtp75\") pod \"calico-kube-controllers-5fc44484bc-vfb78\" (UID: \"e90854ce-cf6e-4a39-9e2a-1f06e654f065\") " pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" Nov 5 15:48:14.198619 kubelet[2780]: I1105 15:48:14.198432 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/121b1dfc-e268-4e0e-8768-d86d30928206-config-volume\") pod \"coredns-668d6bf9bc-zgrpw\" (UID: \"121b1dfc-e268-4e0e-8768-d86d30928206\") " pod="kube-system/coredns-668d6bf9bc-zgrpw" Nov 5 15:48:14.206958 systemd[1]: Created slice 
kubepods-besteffort-pod3b4571ca_2142_4ad0_85e2_e8b00b2fb524.slice - libcontainer container kubepods-besteffort-pod3b4571ca_2142_4ad0_85e2_e8b00b2fb524.slice. Nov 5 15:48:14.218120 systemd[1]: Created slice kubepods-besteffort-pode61d1196_bf4a_4bdd_877c_9ea9a871d23c.slice - libcontainer container kubepods-besteffort-pode61d1196_bf4a_4bdd_877c_9ea9a871d23c.slice. Nov 5 15:48:14.226667 systemd[1]: Created slice kubepods-besteffort-pode90854ce_cf6e_4a39_9e2a_1f06e654f065.slice - libcontainer container kubepods-besteffort-pode90854ce_cf6e_4a39_9e2a_1f06e654f065.slice. Nov 5 15:48:14.234226 systemd[1]: Created slice kubepods-besteffort-podd8afeb1c_714d_4335_9a1d_a1135daaa2b3.slice - libcontainer container kubepods-besteffort-podd8afeb1c_714d_4335_9a1d_a1135daaa2b3.slice. Nov 5 15:48:14.346863 systemd[1]: Created slice kubepods-besteffort-pod175c15d8_2ca8_4a9b_b355_438a1e3fa9fd.slice - libcontainer container kubepods-besteffort-pod175c15d8_2ca8_4a9b_b355_438a1e3fa9fd.slice. Nov 5 15:48:14.349651 containerd[1603]: time="2025-11-05T15:48:14.349619380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6slgh,Uid:175c15d8-2ca8-4a9b-b355-438a1e3fa9fd,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:14.410715 containerd[1603]: time="2025-11-05T15:48:14.410657288Z" level=error msg="Failed to destroy network for sandbox \"16b82cedcf6b2101a309cfd5d8797b37f5fce23a2a29f95d2d74cc9a6acc838f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.412892 containerd[1603]: time="2025-11-05T15:48:14.412820346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6slgh,Uid:175c15d8-2ca8-4a9b-b355-438a1e3fa9fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"16b82cedcf6b2101a309cfd5d8797b37f5fce23a2a29f95d2d74cc9a6acc838f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.413227 kubelet[2780]: E1105 15:48:14.413171 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b82cedcf6b2101a309cfd5d8797b37f5fce23a2a29f95d2d74cc9a6acc838f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.413619 kubelet[2780]: E1105 15:48:14.413344 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b82cedcf6b2101a309cfd5d8797b37f5fce23a2a29f95d2d74cc9a6acc838f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6slgh" Nov 5 15:48:14.413619 kubelet[2780]: E1105 15:48:14.413367 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b82cedcf6b2101a309cfd5d8797b37f5fce23a2a29f95d2d74cc9a6acc838f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6slgh" Nov 5 15:48:14.413619 kubelet[2780]: E1105 15:48:14.413409 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16b82cedcf6b2101a309cfd5d8797b37f5fce23a2a29f95d2d74cc9a6acc838f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:14.418252 kubelet[2780]: E1105 15:48:14.418237 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:14.419384 containerd[1603]: time="2025-11-05T15:48:14.419337680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:48:14.437850 systemd[1]: run-netns-cni\x2db215cc1d\x2d8dde\x2d715f\x2d330c\x2de5aaf4329ad8.mount: Deactivated successfully. 
Nov 5 15:48:14.473536 kubelet[2780]: E1105 15:48:14.473504 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:14.474292 containerd[1603]: time="2025-11-05T15:48:14.474239205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77dvp,Uid:765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:14.485755 kubelet[2780]: E1105 15:48:14.485709 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:14.489210 containerd[1603]: time="2025-11-05T15:48:14.489077550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgrpw,Uid:121b1dfc-e268-4e0e-8768-d86d30928206,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:14.501916 containerd[1603]: time="2025-11-05T15:48:14.501753987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cbc8c6f7d-qrz8n,Uid:04475037-4abe-43a1-ba27-907888160a07,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:14.514767 containerd[1603]: time="2025-11-05T15:48:14.514700214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwjt,Uid:3b4571ca-2142-4ad0-85e2-e8b00b2fb524,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:14.525746 containerd[1603]: time="2025-11-05T15:48:14.525694363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-74nvz,Uid:e61d1196-bf4a-4bdd-877c-9ea9a871d23c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:14.535643 containerd[1603]: time="2025-11-05T15:48:14.535554614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc44484bc-vfb78,Uid:e90854ce-cf6e-4a39-9e2a-1f06e654f065,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:14.537520 
containerd[1603]: time="2025-11-05T15:48:14.537488352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-px2b8,Uid:d8afeb1c-714d-4335-9a1d-a1135daaa2b3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:14.665497 containerd[1603]: time="2025-11-05T15:48:14.664974134Z" level=error msg="Failed to destroy network for sandbox \"42809d99f17e9c9aaf18162b75085af243ddce478234fd04c55e794e56ade85d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.668891 containerd[1603]: time="2025-11-05T15:48:14.668848610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgrpw,Uid:121b1dfc-e268-4e0e-8768-d86d30928206,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42809d99f17e9c9aaf18162b75085af243ddce478234fd04c55e794e56ade85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.669659 kubelet[2780]: E1105 15:48:14.669558 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42809d99f17e9c9aaf18162b75085af243ddce478234fd04c55e794e56ade85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.669954 kubelet[2780]: E1105 15:48:14.669934 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42809d99f17e9c9aaf18162b75085af243ddce478234fd04c55e794e56ade85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zgrpw" Nov 5 15:48:14.670130 kubelet[2780]: E1105 15:48:14.670038 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42809d99f17e9c9aaf18162b75085af243ddce478234fd04c55e794e56ade85d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zgrpw" Nov 5 15:48:14.670268 kubelet[2780]: E1105 15:48:14.670211 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zgrpw_kube-system(121b1dfc-e268-4e0e-8768-d86d30928206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zgrpw_kube-system(121b1dfc-e268-4e0e-8768-d86d30928206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42809d99f17e9c9aaf18162b75085af243ddce478234fd04c55e794e56ade85d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zgrpw" podUID="121b1dfc-e268-4e0e-8768-d86d30928206" Nov 5 15:48:14.706973 containerd[1603]: time="2025-11-05T15:48:14.706920762Z" level=error msg="Failed to destroy network for sandbox \"17bf8ddfe61d857d886febf8e62d8b7bc7144d4e2479741a6067fbc53140cf9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.709399 containerd[1603]: time="2025-11-05T15:48:14.709362160Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6cbc8c6f7d-qrz8n,Uid:04475037-4abe-43a1-ba27-907888160a07,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17bf8ddfe61d857d886febf8e62d8b7bc7144d4e2479741a6067fbc53140cf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.710656 kubelet[2780]: E1105 15:48:14.710610 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17bf8ddfe61d857d886febf8e62d8b7bc7144d4e2479741a6067fbc53140cf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.710820 kubelet[2780]: E1105 15:48:14.710680 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17bf8ddfe61d857d886febf8e62d8b7bc7144d4e2479741a6067fbc53140cf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cbc8c6f7d-qrz8n" Nov 5 15:48:14.710820 kubelet[2780]: E1105 15:48:14.710701 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17bf8ddfe61d857d886febf8e62d8b7bc7144d4e2479741a6067fbc53140cf9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cbc8c6f7d-qrz8n" Nov 5 15:48:14.710888 kubelet[2780]: E1105 15:48:14.710825 2780 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cbc8c6f7d-qrz8n_calico-system(04475037-4abe-43a1-ba27-907888160a07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cbc8c6f7d-qrz8n_calico-system(04475037-4abe-43a1-ba27-907888160a07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17bf8ddfe61d857d886febf8e62d8b7bc7144d4e2479741a6067fbc53140cf9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cbc8c6f7d-qrz8n" podUID="04475037-4abe-43a1-ba27-907888160a07" Nov 5 15:48:14.712846 containerd[1603]: time="2025-11-05T15:48:14.712769906Z" level=error msg="Failed to destroy network for sandbox \"0052b57464ad455b2dfec2e30b61445ddb2fc56950eb3dec9883172d8ef47f8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.715154 containerd[1603]: time="2025-11-05T15:48:14.715117954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77dvp,Uid:765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0052b57464ad455b2dfec2e30b61445ddb2fc56950eb3dec9883172d8ef47f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.715267 kubelet[2780]: E1105 15:48:14.715251 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0052b57464ad455b2dfec2e30b61445ddb2fc56950eb3dec9883172d8ef47f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.715312 kubelet[2780]: E1105 15:48:14.715285 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0052b57464ad455b2dfec2e30b61445ddb2fc56950eb3dec9883172d8ef47f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-77dvp" Nov 5 15:48:14.715312 kubelet[2780]: E1105 15:48:14.715303 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0052b57464ad455b2dfec2e30b61445ddb2fc56950eb3dec9883172d8ef47f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-77dvp" Nov 5 15:48:14.715391 kubelet[2780]: E1105 15:48:14.715331 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-77dvp_kube-system(765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-77dvp_kube-system(765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0052b57464ad455b2dfec2e30b61445ddb2fc56950eb3dec9883172d8ef47f8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-77dvp" podUID="765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac" Nov 5 15:48:14.728293 containerd[1603]: time="2025-11-05T15:48:14.728147411Z" level=error msg="Failed to destroy 
network for sandbox \"5a20afd3387db822faf5684aeb0a672b65010f452cf4826d24f77d2e131de9f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.732221 containerd[1603]: time="2025-11-05T15:48:14.731664347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwjt,Uid:3b4571ca-2142-4ad0-85e2-e8b00b2fb524,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a20afd3387db822faf5684aeb0a672b65010f452cf4826d24f77d2e131de9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.732806 kubelet[2780]: E1105 15:48:14.732472 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a20afd3387db822faf5684aeb0a672b65010f452cf4826d24f77d2e131de9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.732806 kubelet[2780]: E1105 15:48:14.732664 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a20afd3387db822faf5684aeb0a672b65010f452cf4826d24f77d2e131de9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qtwjt" Nov 5 15:48:14.733021 containerd[1603]: time="2025-11-05T15:48:14.732661986Z" level=error msg="Failed to destroy network for sandbox \"2dd2a4ecc6b5d19c561b89b3f5aa5c372b6d4e45296830411399ecb4fdb767eb\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.733263 kubelet[2780]: E1105 15:48:14.733141 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a20afd3387db822faf5684aeb0a672b65010f452cf4826d24f77d2e131de9f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qtwjt" Nov 5 15:48:14.733692 kubelet[2780]: E1105 15:48:14.733530 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qtwjt_calico-system(3b4571ca-2142-4ad0-85e2-e8b00b2fb524)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qtwjt_calico-system(3b4571ca-2142-4ad0-85e2-e8b00b2fb524)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a20afd3387db822faf5684aeb0a672b65010f452cf4826d24f77d2e131de9f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:48:14.733990 containerd[1603]: time="2025-11-05T15:48:14.733927495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-74nvz,Uid:e61d1196-bf4a-4bdd-877c-9ea9a871d23c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd2a4ecc6b5d19c561b89b3f5aa5c372b6d4e45296830411399ecb4fdb767eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.735301 kubelet[2780]: E1105 15:48:14.734099 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd2a4ecc6b5d19c561b89b3f5aa5c372b6d4e45296830411399ecb4fdb767eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.735301 kubelet[2780]: E1105 15:48:14.734132 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd2a4ecc6b5d19c561b89b3f5aa5c372b6d4e45296830411399ecb4fdb767eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" Nov 5 15:48:14.735301 kubelet[2780]: E1105 15:48:14.734147 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd2a4ecc6b5d19c561b89b3f5aa5c372b6d4e45296830411399ecb4fdb767eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" Nov 5 15:48:14.735642 kubelet[2780]: E1105 15:48:14.735537 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6644f9d4c6-74nvz_calico-apiserver(e61d1196-bf4a-4bdd-877c-9ea9a871d23c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6644f9d4c6-74nvz_calico-apiserver(e61d1196-bf4a-4bdd-877c-9ea9a871d23c)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"2dd2a4ecc6b5d19c561b89b3f5aa5c372b6d4e45296830411399ecb4fdb767eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:48:14.744157 containerd[1603]: time="2025-11-05T15:48:14.744090675Z" level=error msg="Failed to destroy network for sandbox \"c9d3c90abf64cb5540d6d58643c6b39ebd57bc092faf0dd0f4da49efe1146208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.745221 containerd[1603]: time="2025-11-05T15:48:14.745182564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc44484bc-vfb78,Uid:e90854ce-cf6e-4a39-9e2a-1f06e654f065,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9d3c90abf64cb5540d6d58643c6b39ebd57bc092faf0dd0f4da49efe1146208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.745736 kubelet[2780]: E1105 15:48:14.745367 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9d3c90abf64cb5540d6d58643c6b39ebd57bc092faf0dd0f4da49efe1146208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.745736 kubelet[2780]: E1105 15:48:14.745413 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c9d3c90abf64cb5540d6d58643c6b39ebd57bc092faf0dd0f4da49efe1146208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" Nov 5 15:48:14.745736 kubelet[2780]: E1105 15:48:14.745429 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9d3c90abf64cb5540d6d58643c6b39ebd57bc092faf0dd0f4da49efe1146208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" Nov 5 15:48:14.745987 kubelet[2780]: E1105 15:48:14.745464 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fc44484bc-vfb78_calico-system(e90854ce-cf6e-4a39-9e2a-1f06e654f065)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fc44484bc-vfb78_calico-system(e90854ce-cf6e-4a39-9e2a-1f06e654f065)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9d3c90abf64cb5540d6d58643c6b39ebd57bc092faf0dd0f4da49efe1146208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:48:14.748080 containerd[1603]: time="2025-11-05T15:48:14.748052711Z" level=error msg="Failed to destroy network for sandbox \"bdb1e67febadd1fa63fbc13bb8cfa8552e5b36d2aca2f791759df30c55af2093\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.749322 containerd[1603]: time="2025-11-05T15:48:14.749254670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-px2b8,Uid:d8afeb1c-714d-4335-9a1d-a1135daaa2b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb1e67febadd1fa63fbc13bb8cfa8552e5b36d2aca2f791759df30c55af2093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.749616 kubelet[2780]: E1105 15:48:14.749584 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb1e67febadd1fa63fbc13bb8cfa8552e5b36d2aca2f791759df30c55af2093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:48:14.749675 kubelet[2780]: E1105 15:48:14.749632 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb1e67febadd1fa63fbc13bb8cfa8552e5b36d2aca2f791759df30c55af2093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" Nov 5 15:48:14.749675 kubelet[2780]: E1105 15:48:14.749656 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdb1e67febadd1fa63fbc13bb8cfa8552e5b36d2aca2f791759df30c55af2093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" Nov 5 15:48:14.750193 kubelet[2780]: E1105 15:48:14.750156 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6644f9d4c6-px2b8_calico-apiserver(d8afeb1c-714d-4335-9a1d-a1135daaa2b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6644f9d4c6-px2b8_calico-apiserver(d8afeb1c-714d-4335-9a1d-a1135daaa2b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdb1e67febadd1fa63fbc13bb8cfa8552e5b36d2aca2f791759df30c55af2093\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:48:15.403540 systemd[1]: run-netns-cni\x2d9b50c5ae\x2d8868\x2de3a5\x2de006\x2df45927118c6f.mount: Deactivated successfully. Nov 5 15:48:15.403818 systemd[1]: run-netns-cni\x2deeb5654c\x2d82fc\x2db6de\x2d10aa\x2d9e44e9137786.mount: Deactivated successfully. Nov 5 15:48:15.404010 systemd[1]: run-netns-cni\x2d7cf04a25\x2de463\x2dffcc\x2da67f\x2d6fb6a5a485b9.mount: Deactivated successfully. Nov 5 15:48:18.031947 kubelet[2780]: I1105 15:48:18.031797 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:48:18.033462 kubelet[2780]: E1105 15:48:18.033116 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:18.098499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2357250240.mount: Deactivated successfully. 
Nov 5 15:48:18.128688 containerd[1603]: time="2025-11-05T15:48:18.128139901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:18.128688 containerd[1603]: time="2025-11-05T15:48:18.128664630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:48:18.129159 containerd[1603]: time="2025-11-05T15:48:18.129139780Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:18.130469 containerd[1603]: time="2025-11-05T15:48:18.130449619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:48:18.130844 containerd[1603]: time="2025-11-05T15:48:18.130814438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.711350528s" Nov 5 15:48:18.130887 containerd[1603]: time="2025-11-05T15:48:18.130846408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:48:18.146755 containerd[1603]: time="2025-11-05T15:48:18.146699312Z" level=info msg="CreateContainer within sandbox \"4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:48:18.160473 containerd[1603]: time="2025-11-05T15:48:18.160047549Z" level=info msg="Container 
eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:18.163138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135289363.mount: Deactivated successfully. Nov 5 15:48:18.170819 containerd[1603]: time="2025-11-05T15:48:18.170783108Z" level=info msg="CreateContainer within sandbox \"4143999e99c580ef80aebaebcd9eaf15e733e7b4dee12e04acfa12795be26ddb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\"" Nov 5 15:48:18.172044 containerd[1603]: time="2025-11-05T15:48:18.172020697Z" level=info msg="StartContainer for \"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\"" Nov 5 15:48:18.174771 containerd[1603]: time="2025-11-05T15:48:18.174699624Z" level=info msg="connecting to shim eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d" address="unix:///run/containerd/s/0f2557fb60739477c762ab1887ef935624f080e176ea39ac440b6e7f87455bce" protocol=ttrpc version=3 Nov 5 15:48:18.234860 systemd[1]: Started cri-containerd-eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d.scope - libcontainer container eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d. Nov 5 15:48:18.283512 containerd[1603]: time="2025-11-05T15:48:18.283434216Z" level=info msg="StartContainer for \"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" returns successfully" Nov 5 15:48:18.386599 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:48:18.386749 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 15:48:18.429802 kubelet[2780]: E1105 15:48:18.429771 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:18.430747 kubelet[2780]: E1105 15:48:18.430715 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:18.456522 kubelet[2780]: I1105 15:48:18.456438 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r22v8" podStartSLOduration=1.423528096 podStartE2EDuration="10.456420183s" podCreationTimestamp="2025-11-05 15:48:08 +0000 UTC" firstStartedPulling="2025-11-05 15:48:09.09885975 +0000 UTC m=+21.878094632" lastFinishedPulling="2025-11-05 15:48:18.131751837 +0000 UTC m=+30.910986719" observedRunningTime="2025-11-05 15:48:18.456148683 +0000 UTC m=+31.235383565" watchObservedRunningTime="2025-11-05 15:48:18.456420183 +0000 UTC m=+31.235655065" Nov 5 15:48:18.532744 kubelet[2780]: I1105 15:48:18.531098 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04475037-4abe-43a1-ba27-907888160a07-whisker-backend-key-pair\") pod \"04475037-4abe-43a1-ba27-907888160a07\" (UID: \"04475037-4abe-43a1-ba27-907888160a07\") " Nov 5 15:48:18.532744 kubelet[2780]: I1105 15:48:18.531137 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9hf5\" (UniqueName: \"kubernetes.io/projected/04475037-4abe-43a1-ba27-907888160a07-kube-api-access-k9hf5\") pod \"04475037-4abe-43a1-ba27-907888160a07\" (UID: \"04475037-4abe-43a1-ba27-907888160a07\") " Nov 5 15:48:18.532744 kubelet[2780]: I1105 15:48:18.531157 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04475037-4abe-43a1-ba27-907888160a07-whisker-ca-bundle\") pod \"04475037-4abe-43a1-ba27-907888160a07\" (UID: \"04475037-4abe-43a1-ba27-907888160a07\") " Nov 5 15:48:18.537129 kubelet[2780]: I1105 15:48:18.537054 2780 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04475037-4abe-43a1-ba27-907888160a07-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "04475037-4abe-43a1-ba27-907888160a07" (UID: "04475037-4abe-43a1-ba27-907888160a07"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:48:18.549457 kubelet[2780]: I1105 15:48:18.549426 2780 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04475037-4abe-43a1-ba27-907888160a07-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "04475037-4abe-43a1-ba27-907888160a07" (UID: "04475037-4abe-43a1-ba27-907888160a07"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:48:18.549541 kubelet[2780]: I1105 15:48:18.549506 2780 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04475037-4abe-43a1-ba27-907888160a07-kube-api-access-k9hf5" (OuterVolumeSpecName: "kube-api-access-k9hf5") pod "04475037-4abe-43a1-ba27-907888160a07" (UID: "04475037-4abe-43a1-ba27-907888160a07"). InnerVolumeSpecName "kube-api-access-k9hf5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:48:18.630630 containerd[1603]: time="2025-11-05T15:48:18.630571059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"16c5bc4379af74a3f8df0e8d98afb44eddf63b54bb89870a1a33711bb0bf4fad\" pid:3863 exit_status:1 exited_at:{seconds:1762357698 nanos:629988579}" Nov 5 15:48:18.631970 kubelet[2780]: I1105 15:48:18.631926 2780 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04475037-4abe-43a1-ba27-907888160a07-whisker-backend-key-pair\") on node \"172-239-60-160\" DevicePath \"\"" Nov 5 15:48:18.631970 kubelet[2780]: I1105 15:48:18.631957 2780 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k9hf5\" (UniqueName: \"kubernetes.io/projected/04475037-4abe-43a1-ba27-907888160a07-kube-api-access-k9hf5\") on node \"172-239-60-160\" DevicePath \"\"" Nov 5 15:48:18.631970 kubelet[2780]: I1105 15:48:18.631968 2780 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04475037-4abe-43a1-ba27-907888160a07-whisker-ca-bundle\") on node \"172-239-60-160\" DevicePath \"\"" Nov 5 15:48:19.098294 systemd[1]: var-lib-kubelet-pods-04475037\x2d4abe\x2d43a1\x2dba27\x2d907888160a07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9hf5.mount: Deactivated successfully. Nov 5 15:48:19.098400 systemd[1]: var-lib-kubelet-pods-04475037\x2d4abe\x2d43a1\x2dba27\x2d907888160a07-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:48:19.327120 systemd[1]: Removed slice kubepods-besteffort-pod04475037_4abe_43a1_ba27_907888160a07.slice - libcontainer container kubepods-besteffort-pod04475037_4abe_43a1_ba27_907888160a07.slice. 
Nov 5 15:48:19.433996 kubelet[2780]: E1105 15:48:19.433608 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:19.500917 systemd[1]: Created slice kubepods-besteffort-pod1ae08b84_bd20_47be_a4e3_39130515cbd3.slice - libcontainer container kubepods-besteffort-pod1ae08b84_bd20_47be_a4e3_39130515cbd3.slice. Nov 5 15:48:19.537658 kubelet[2780]: I1105 15:48:19.537612 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knk57\" (UniqueName: \"kubernetes.io/projected/1ae08b84-bd20-47be-a4e3-39130515cbd3-kube-api-access-knk57\") pod \"whisker-5d78f875fb-jwnjz\" (UID: \"1ae08b84-bd20-47be-a4e3-39130515cbd3\") " pod="calico-system/whisker-5d78f875fb-jwnjz" Nov 5 15:48:19.538283 kubelet[2780]: I1105 15:48:19.538228 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1ae08b84-bd20-47be-a4e3-39130515cbd3-whisker-backend-key-pair\") pod \"whisker-5d78f875fb-jwnjz\" (UID: \"1ae08b84-bd20-47be-a4e3-39130515cbd3\") " pod="calico-system/whisker-5d78f875fb-jwnjz" Nov 5 15:48:19.538398 kubelet[2780]: I1105 15:48:19.538379 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ae08b84-bd20-47be-a4e3-39130515cbd3-whisker-ca-bundle\") pod \"whisker-5d78f875fb-jwnjz\" (UID: \"1ae08b84-bd20-47be-a4e3-39130515cbd3\") " pod="calico-system/whisker-5d78f875fb-jwnjz" Nov 5 15:48:19.569199 containerd[1603]: time="2025-11-05T15:48:19.569160380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"c642184ab67392584965bbeebcc7c37e23b467d9e924569fe5286995c97cdb55\" pid:3908 
exit_status:1 exited_at:{seconds:1762357699 nanos:566839112}" Nov 5 15:48:19.808760 containerd[1603]: time="2025-11-05T15:48:19.807199832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d78f875fb-jwnjz,Uid:1ae08b84-bd20-47be-a4e3-39130515cbd3,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:20.034591 systemd-networkd[1515]: calie917f706d4b: Link UP Nov 5 15:48:20.034876 systemd-networkd[1515]: calie917f706d4b: Gained carrier Nov 5 15:48:20.051515 containerd[1603]: 2025-11-05 15:48:19.849 [INFO][3932] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:48:20.051515 containerd[1603]: 2025-11-05 15:48:19.906 [INFO][3932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0 whisker-5d78f875fb- calico-system 1ae08b84-bd20-47be-a4e3-39130515cbd3 880 0 2025-11-05 15:48:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d78f875fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-60-160 whisker-5d78f875fb-jwnjz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie917f706d4b [] [] }} ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-" Nov 5 15:48:20.051515 containerd[1603]: 2025-11-05 15:48:19.908 [INFO][3932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" Nov 5 15:48:20.051515 containerd[1603]: 2025-11-05 15:48:19.959 [INFO][3993] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" HandleID="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Workload="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.960 [INFO][3993] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" HandleID="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Workload="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cafa0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-60-160", "pod":"whisker-5d78f875fb-jwnjz", "timestamp":"2025-11-05 15:48:19.959678449 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.960 [INFO][3993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.960 [INFO][3993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.960 [INFO][3993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.969 [INFO][3993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" host="172-239-60-160" Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.974 [INFO][3993] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.980 [INFO][3993] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.982 [INFO][3993] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.986 [INFO][3993] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:20.052826 containerd[1603]: 2025-11-05 15:48:19.986 [INFO][3993] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" host="172-239-60-160" Nov 5 15:48:20.053113 containerd[1603]: 2025-11-05 15:48:19.992 [INFO][3993] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce Nov 5 15:48:20.053113 containerd[1603]: 2025-11-05 15:48:19.997 [INFO][3993] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" host="172-239-60-160" Nov 5 15:48:20.053113 containerd[1603]: 2025-11-05 15:48:20.003 [INFO][3993] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.1/26] block=192.168.50.0/26 
handle="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" host="172-239-60-160" Nov 5 15:48:20.053113 containerd[1603]: 2025-11-05 15:48:20.004 [INFO][3993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.1/26] handle="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" host="172-239-60-160" Nov 5 15:48:20.053113 containerd[1603]: 2025-11-05 15:48:20.004 [INFO][3993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:20.053113 containerd[1603]: 2025-11-05 15:48:20.004 [INFO][3993] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.1/26] IPv6=[] ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" HandleID="k8s-pod-network.7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Workload="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" Nov 5 15:48:20.053243 containerd[1603]: 2025-11-05 15:48:20.014 [INFO][3932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0", GenerateName:"whisker-5d78f875fb-", Namespace:"calico-system", SelfLink:"", UID:"1ae08b84-bd20-47be-a4e3-39130515cbd3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d78f875fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"whisker-5d78f875fb-jwnjz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie917f706d4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:20.053243 containerd[1603]: 2025-11-05 15:48:20.014 [INFO][3932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.1/32] ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" Nov 5 15:48:20.053315 containerd[1603]: 2025-11-05 15:48:20.014 [INFO][3932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie917f706d4b ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" Nov 5 15:48:20.053315 containerd[1603]: 2025-11-05 15:48:20.028 [INFO][3932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" Nov 5 15:48:20.055788 containerd[1603]: 2025-11-05 15:48:20.028 [INFO][3932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" 
Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0", GenerateName:"whisker-5d78f875fb-", Namespace:"calico-system", SelfLink:"", UID:"1ae08b84-bd20-47be-a4e3-39130515cbd3", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d78f875fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce", Pod:"whisker-5d78f875fb-jwnjz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie917f706d4b", MAC:"d6:f9:3f:b5:2a:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:20.055856 containerd[1603]: 2025-11-05 15:48:20.042 [INFO][3932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" Namespace="calico-system" Pod="whisker-5d78f875fb-jwnjz" WorkloadEndpoint="172--239--60--160-k8s-whisker--5d78f875fb--jwnjz-eth0" Nov 5 15:48:20.142086 containerd[1603]: 
time="2025-11-05T15:48:20.141962537Z" level=info msg="connecting to shim 7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce" address="unix:///run/containerd/s/a11d5c1272cd8c9bdc9db55ae67a384c8db5b1a5eb7c193d6c639a3d90731691" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:20.198195 systemd[1]: Started cri-containerd-7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce.scope - libcontainer container 7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce. Nov 5 15:48:20.301295 containerd[1603]: time="2025-11-05T15:48:20.301250468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d78f875fb-jwnjz,Uid:1ae08b84-bd20-47be-a4e3-39130515cbd3,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b2988d2572544a91f9b1ebaf2699a3ba7c2af596ba5c1ecdd741df203e717ce\"" Nov 5 15:48:20.304764 containerd[1603]: time="2025-11-05T15:48:20.304229405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:48:20.497956 containerd[1603]: time="2025-11-05T15:48:20.497246292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:20.500274 containerd[1603]: time="2025-11-05T15:48:20.499961529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:48:20.500274 containerd[1603]: time="2025-11-05T15:48:20.499994129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:48:20.501269 kubelet[2780]: E1105 15:48:20.500990 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:48:20.503246 kubelet[2780]: E1105 15:48:20.502008 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:48:20.511424 kubelet[2780]: E1105 15:48:20.511279 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3624bb609f634714aab3714467b41e19,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessage
Policy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:20.515853 containerd[1603]: time="2025-11-05T15:48:20.515496784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:48:20.626971 systemd-networkd[1515]: vxlan.calico: Link UP Nov 5 15:48:20.626981 systemd-networkd[1515]: vxlan.calico: Gained carrier Nov 5 15:48:20.948472 containerd[1603]: time="2025-11-05T15:48:20.948336861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:20.949958 containerd[1603]: time="2025-11-05T15:48:20.949908639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:48:20.949958 containerd[1603]: time="2025-11-05T15:48:20.949934459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:48:20.950506 kubelet[2780]: E1105 15:48:20.950118 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 
15:48:20.950506 kubelet[2780]: E1105 15:48:20.950169 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:48:20.950580 kubelet[2780]: E1105 15:48:20.950270 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*fal
se,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:20.956942 kubelet[2780]: E1105 15:48:20.956905 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:48:21.316047 kubelet[2780]: I1105 15:48:21.315982 2780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04475037-4abe-43a1-ba27-907888160a07" path="/var/lib/kubelet/pods/04475037-4abe-43a1-ba27-907888160a07/volumes" Nov 5 15:48:21.439821 kubelet[2780]: E1105 15:48:21.439717 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:48:21.548023 systemd-networkd[1515]: calie917f706d4b: Gained IPv6LL Nov 5 15:48:22.507241 systemd-networkd[1515]: vxlan.calico: Gained IPv6LL Nov 5 15:48:25.314158 containerd[1603]: time="2025-11-05T15:48:25.313116146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwjt,Uid:3b4571ca-2142-4ad0-85e2-e8b00b2fb524,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:25.315596 containerd[1603]: time="2025-11-05T15:48:25.315227884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-px2b8,Uid:d8afeb1c-714d-4335-9a1d-a1135daaa2b3,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:25.315596 containerd[1603]: time="2025-11-05T15:48:25.315390474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc44484bc-vfb78,Uid:e90854ce-cf6e-4a39-9e2a-1f06e654f065,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:25.496835 systemd-networkd[1515]: calicd4bb605c05: Link UP Nov 5 15:48:25.497085 systemd-networkd[1515]: calicd4bb605c05: Gained carrier Nov 5 15:48:25.525331 
containerd[1603]: 2025-11-05 15:48:25.416 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0 goldmane-666569f655- calico-system 3b4571ca-2142-4ad0-85e2-e8b00b2fb524 803 0 2025-11-05 15:48:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-60-160 goldmane-666569f655-qtwjt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicd4bb605c05 [] [] }} ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-" Nov 5 15:48:25.525331 containerd[1603]: 2025-11-05 15:48:25.417 [INFO][4191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" Nov 5 15:48:25.525331 containerd[1603]: 2025-11-05 15:48:25.449 [INFO][4229] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" HandleID="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Workload="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.452 [INFO][4229] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" HandleID="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Workload="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd660), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-60-160", "pod":"goldmane-666569f655-qtwjt", "timestamp":"2025-11-05 15:48:25.449959309 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.452 [INFO][4229] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.452 [INFO][4229] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.452 [INFO][4229] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.459 [INFO][4229] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" host="172-239-60-160" Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.465 [INFO][4229] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.469 [INFO][4229] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.471 [INFO][4229] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.474 [INFO][4229] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.525563 containerd[1603]: 2025-11-05 15:48:25.474 [INFO][4229] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 
handle="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" host="172-239-60-160" Nov 5 15:48:25.526363 containerd[1603]: 2025-11-05 15:48:25.475 [INFO][4229] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d Nov 5 15:48:25.526363 containerd[1603]: 2025-11-05 15:48:25.479 [INFO][4229] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" host="172-239-60-160" Nov 5 15:48:25.526363 containerd[1603]: 2025-11-05 15:48:25.484 [INFO][4229] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.2/26] block=192.168.50.0/26 handle="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" host="172-239-60-160" Nov 5 15:48:25.526363 containerd[1603]: 2025-11-05 15:48:25.484 [INFO][4229] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.2/26] handle="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" host="172-239-60-160" Nov 5 15:48:25.526363 containerd[1603]: 2025-11-05 15:48:25.484 [INFO][4229] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:48:25.526363 containerd[1603]: 2025-11-05 15:48:25.484 [INFO][4229] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.2/26] IPv6=[] ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" HandleID="k8s-pod-network.3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Workload="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" Nov 5 15:48:25.526614 containerd[1603]: 2025-11-05 15:48:25.489 [INFO][4191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3b4571ca-2142-4ad0-85e2-e8b00b2fb524", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"goldmane-666569f655-qtwjt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calicd4bb605c05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:25.526614 containerd[1603]: 2025-11-05 15:48:25.489 [INFO][4191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.2/32] ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" Nov 5 15:48:25.526828 containerd[1603]: 2025-11-05 15:48:25.490 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd4bb605c05 ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" Nov 5 15:48:25.526828 containerd[1603]: 2025-11-05 15:48:25.496 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" Nov 5 15:48:25.526929 containerd[1603]: 2025-11-05 15:48:25.498 [INFO][4191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3b4571ca-2142-4ad0-85e2-e8b00b2fb524", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, 
time.November, 5, 15, 48, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d", Pod:"goldmane-666569f655-qtwjt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicd4bb605c05", MAC:"9e:c7:e3:a5:86:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:25.527041 containerd[1603]: 2025-11-05 15:48:25.511 [INFO][4191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" Namespace="calico-system" Pod="goldmane-666569f655-qtwjt" WorkloadEndpoint="172--239--60--160-k8s-goldmane--666569f655--qtwjt-eth0" Nov 5 15:48:25.562640 containerd[1603]: time="2025-11-05T15:48:25.562570876Z" level=info msg="connecting to shim 3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d" address="unix:///run/containerd/s/4c131b6abf8e688c40ce130c521920757649db1c6c7b8c2f94ddc28c099a5e19" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:25.610971 systemd[1]: Started cri-containerd-3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d.scope - libcontainer container 3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d. 
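The IPAM sequence in the entries above (acquire the host-wide lock, confirm affinity for block 192.168.50.0/26, then claim the next free address — .2, .3 and .4 are handed out in turn) can be sketched as a simple model. This is an illustrative sketch only, not Calico's actual allocator (which tracks allocations in a datastore-backed block object), and the function name is invented:

```python
import ipaddress

def next_free_ip(block_cidr, allocated):
    """Illustrative model of claiming the next free address in an
    IPAM block. Real Calico IPAM holds a host-wide lock (as the log
    shows) and persists the claim by writing the block back to the
    datastore before releasing the lock."""
    block = ipaddress.ip_network(block_cidr)
    for ip in block.hosts():  # hosts() skips network/broadcast addresses
        if str(ip) not in allocated:
            return str(ip)
    return None  # block exhausted; a real allocator would try another block

# Mirrors the log: with .1 taken, the goldmane pod receives 192.168.50.2,
# and each later claim advances through the /26.
print(next_free_ip("192.168.50.0/26", {"192.168.50.1"}))
```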
Nov 5 15:48:25.635008 systemd-networkd[1515]: cali1d16b81dc95: Link UP Nov 5 15:48:25.636033 systemd-networkd[1515]: cali1d16b81dc95: Gained carrier Nov 5 15:48:25.662004 containerd[1603]: 2025-11-05 15:48:25.403 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0 calico-apiserver-6644f9d4c6- calico-apiserver d8afeb1c-714d-4335-9a1d-a1135daaa2b3 800 0 2025-11-05 15:48:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6644f9d4c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-60-160 calico-apiserver-6644f9d4c6-px2b8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1d16b81dc95 [] [] }} ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-" Nov 5 15:48:25.662004 containerd[1603]: 2025-11-05 15:48:25.403 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" Nov 5 15:48:25.662004 containerd[1603]: 2025-11-05 15:48:25.463 [INFO][4221] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" HandleID="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Workload="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.463 
[INFO][4221] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" HandleID="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Workload="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-60-160", "pod":"calico-apiserver-6644f9d4c6-px2b8", "timestamp":"2025-11-05 15:48:25.463660775 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.463 [INFO][4221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.484 [INFO][4221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.485 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.563 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" host="172-239-60-160" Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.571 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.587 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.590 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.662230 containerd[1603]: 2025-11-05 15:48:25.594 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.662460 containerd[1603]: 2025-11-05 15:48:25.594 [INFO][4221] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" host="172-239-60-160" Nov 5 15:48:25.662460 containerd[1603]: 2025-11-05 15:48:25.598 [INFO][4221] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d Nov 5 15:48:25.662460 containerd[1603]: 2025-11-05 15:48:25.610 [INFO][4221] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" host="172-239-60-160" Nov 5 15:48:25.662460 containerd[1603]: 2025-11-05 15:48:25.617 [INFO][4221] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.3/26] block=192.168.50.0/26 
handle="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" host="172-239-60-160" Nov 5 15:48:25.662460 containerd[1603]: 2025-11-05 15:48:25.617 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.3/26] handle="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" host="172-239-60-160" Nov 5 15:48:25.662460 containerd[1603]: 2025-11-05 15:48:25.618 [INFO][4221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:25.662460 containerd[1603]: 2025-11-05 15:48:25.618 [INFO][4221] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.3/26] IPv6=[] ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" HandleID="k8s-pod-network.0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Workload="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" Nov 5 15:48:25.662591 containerd[1603]: 2025-11-05 15:48:25.624 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0", GenerateName:"calico-apiserver-6644f9d4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8afeb1c-714d-4335-9a1d-a1135daaa2b3", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f9d4c6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"calico-apiserver-6644f9d4c6-px2b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d16b81dc95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:25.662645 containerd[1603]: 2025-11-05 15:48:25.624 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.3/32] ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" Nov 5 15:48:25.662645 containerd[1603]: 2025-11-05 15:48:25.624 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d16b81dc95 ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" Nov 5 15:48:25.662645 containerd[1603]: 2025-11-05 15:48:25.638 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" Nov 5 15:48:25.662708 containerd[1603]: 2025-11-05 15:48:25.640 [INFO][4186] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0", GenerateName:"calico-apiserver-6644f9d4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8afeb1c-714d-4335-9a1d-a1135daaa2b3", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f9d4c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d", Pod:"calico-apiserver-6644f9d4c6-px2b8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1d16b81dc95", MAC:"fe:55:57:52:45:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:25.664060 containerd[1603]: 2025-11-05 15:48:25.656 [INFO][4186] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-px2b8" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--px2b8-eth0" Nov 5 15:48:25.727228 containerd[1603]: time="2025-11-05T15:48:25.727143562Z" level=info msg="connecting to shim 0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d" address="unix:///run/containerd/s/f62446a2df6171059e93066fdca6454285ea9f6f0b7bf0e92b765da04d57b04a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:25.736929 containerd[1603]: time="2025-11-05T15:48:25.736878142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwjt,Uid:3b4571ca-2142-4ad0-85e2-e8b00b2fb524,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d004915c19ab10818b1bfe7d6c0cccd2c91a79af74d61904a5dbb997292471d\"" Nov 5 15:48:25.739606 containerd[1603]: time="2025-11-05T15:48:25.739352810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:48:25.754376 systemd-networkd[1515]: cali6c748460735: Link UP Nov 5 15:48:25.757116 systemd-networkd[1515]: cali6c748460735: Gained carrier Nov 5 15:48:25.782533 containerd[1603]: 2025-11-05 15:48:25.414 [INFO][4195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0 calico-kube-controllers-5fc44484bc- calico-system e90854ce-cf6e-4a39-9e2a-1f06e654f065 806 0 2025-11-05 15:48:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fc44484bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-60-160 calico-kube-controllers-5fc44484bc-vfb78 eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali6c748460735 [] [] }} ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-" Nov 5 15:48:25.782533 containerd[1603]: 2025-11-05 15:48:25.414 [INFO][4195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" Nov 5 15:48:25.782533 containerd[1603]: 2025-11-05 15:48:25.463 [INFO][4227] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" HandleID="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Workload="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.464 [INFO][4227] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" HandleID="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Workload="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-60-160", "pod":"calico-kube-controllers-5fc44484bc-vfb78", "timestamp":"2025-11-05 15:48:25.463692205 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:25.782888 containerd[1603]: 
2025-11-05 15:48:25.464 [INFO][4227] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.618 [INFO][4227] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.618 [INFO][4227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.665 [INFO][4227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" host="172-239-60-160" Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.677 [INFO][4227] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.688 [INFO][4227] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.690 [INFO][4227] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.782888 containerd[1603]: 2025-11-05 15:48:25.709 [INFO][4227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:25.783080 containerd[1603]: 2025-11-05 15:48:25.709 [INFO][4227] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" host="172-239-60-160" Nov 5 15:48:25.783080 containerd[1603]: 2025-11-05 15:48:25.711 [INFO][4227] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57 Nov 5 15:48:25.783080 containerd[1603]: 2025-11-05 15:48:25.719 [INFO][4227] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 
handle="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" host="172-239-60-160" Nov 5 15:48:25.783080 containerd[1603]: 2025-11-05 15:48:25.732 [INFO][4227] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.4/26] block=192.168.50.0/26 handle="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" host="172-239-60-160" Nov 5 15:48:25.783080 containerd[1603]: 2025-11-05 15:48:25.733 [INFO][4227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.4/26] handle="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" host="172-239-60-160" Nov 5 15:48:25.783080 containerd[1603]: 2025-11-05 15:48:25.734 [INFO][4227] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:25.783080 containerd[1603]: 2025-11-05 15:48:25.734 [INFO][4227] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.4/26] IPv6=[] ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" HandleID="k8s-pod-network.1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Workload="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" Nov 5 15:48:25.783214 containerd[1603]: 2025-11-05 15:48:25.743 [INFO][4195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0", GenerateName:"calico-kube-controllers-5fc44484bc-", Namespace:"calico-system", SelfLink:"", UID:"e90854ce-cf6e-4a39-9e2a-1f06e654f065", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 
5, 15, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc44484bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"calico-kube-controllers-5fc44484bc-vfb78", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c748460735", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:25.783271 containerd[1603]: 2025-11-05 15:48:25.743 [INFO][4195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.4/32] ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" Nov 5 15:48:25.783271 containerd[1603]: 2025-11-05 15:48:25.744 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c748460735 ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" Nov 5 15:48:25.783271 containerd[1603]: 2025-11-05 15:48:25.756 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" Nov 5 15:48:25.783332 containerd[1603]: 2025-11-05 15:48:25.758 [INFO][4195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0", GenerateName:"calico-kube-controllers-5fc44484bc-", Namespace:"calico-system", SelfLink:"", UID:"e90854ce-cf6e-4a39-9e2a-1f06e654f065", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc44484bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57", Pod:"calico-kube-controllers-5fc44484bc-vfb78", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c748460735", MAC:"ba:b2:99:00:e5:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:25.784169 containerd[1603]: 2025-11-05 15:48:25.776 [INFO][4195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" Namespace="calico-system" Pod="calico-kube-controllers-5fc44484bc-vfb78" WorkloadEndpoint="172--239--60--160-k8s-calico--kube--controllers--5fc44484bc--vfb78-eth0" Nov 5 15:48:25.791038 systemd[1]: Started cri-containerd-0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d.scope - libcontainer container 0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d. Nov 5 15:48:25.813517 containerd[1603]: time="2025-11-05T15:48:25.813487266Z" level=info msg="connecting to shim 1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57" address="unix:///run/containerd/s/de4f7ab12d44b2b1d9fc8bd38a43375e18f8485e73950b80b0f38dac79647c10" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:25.847183 systemd[1]: Started cri-containerd-1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57.scope - libcontainer container 1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57. 
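The host-side interface names chosen above (calicd4bb605c05, cali1d16b81dc95, cali6c748460735) all follow the same shape: the "cali" prefix plus 11 hex characters. A plausible sketch of the derivation is a truncated hash of the workload identifier; the exact identifier format hashed here is an assumption, so the generated names will not reproduce the ones in this log:

```python
import hashlib

def veth_name(namespace, pod, prefix="cali"):
    """Sketch of deriving a stable host-side veth name from a workload
    identity: fixed prefix + first 11 hex chars of a hash. The
    "namespace.pod" input format is assumed for illustration."""
    digest = hashlib.sha1(f"{namespace}.{pod}".encode()).hexdigest()
    return prefix + digest[:11]

name = veth_name("calico-system", "goldmane-666569f655-qtwjt")
```

A deterministic name lets the CNI plugin find (or clean up) the host-side end of the veth pair on retry without storing extra state.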
Nov 5 15:48:25.876116 containerd[1603]: time="2025-11-05T15:48:25.876006413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:25.877916 containerd[1603]: time="2025-11-05T15:48:25.877637271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:48:25.878037 containerd[1603]: time="2025-11-05T15:48:25.877718031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:48:25.878392 kubelet[2780]: E1105 15:48:25.878257 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:48:25.878392 kubelet[2780]: E1105 15:48:25.878343 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:48:25.880750 kubelet[2780]: E1105 15:48:25.879686 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktptk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwjt_calico-system(3b4571ca-2142-4ad0-85e2-e8b00b2fb524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:25.881359 kubelet[2780]: E1105 15:48:25.881300 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:48:25.884019 containerd[1603]: time="2025-11-05T15:48:25.883995675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-px2b8,Uid:d8afeb1c-714d-4335-9a1d-a1135daaa2b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"0aff8befce869d6fd8faf15abb850b4b754028d68e69b0993b76ad81a485c87d\"" Nov 5 15:48:25.892425 containerd[1603]: time="2025-11-05T15:48:25.892403377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:48:25.945156 containerd[1603]: time="2025-11-05T15:48:25.945094414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc44484bc-vfb78,Uid:e90854ce-cf6e-4a39-9e2a-1f06e654f065,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b00058be827ab481b5edcc0eab78361be188deb2bcd9b8b0275a4efe9650f57\"" Nov 5 15:48:26.038976 containerd[1603]: time="2025-11-05T15:48:26.038931500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:26.040597 containerd[1603]: time="2025-11-05T15:48:26.040433199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:48:26.040597 containerd[1603]: time="2025-11-05T15:48:26.040521089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:48:26.041026 kubelet[2780]: E1105 15:48:26.040967 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:26.041026 kubelet[2780]: E1105 15:48:26.041023 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:26.042614 containerd[1603]: time="2025-11-05T15:48:26.042510127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:48:26.042706 kubelet[2780]: E1105 15:48:26.042293 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-px2b8_calico-apiserver(d8afeb1c-714d-4335-9a1d-a1135daaa2b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:26.044193 kubelet[2780]: E1105 15:48:26.043905 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:48:26.180998 containerd[1603]: time="2025-11-05T15:48:26.180802498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:26.182108 containerd[1603]: 
time="2025-11-05T15:48:26.181918777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:48:26.182108 containerd[1603]: time="2025-11-05T15:48:26.181972087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:48:26.182360 kubelet[2780]: E1105 15:48:26.182303 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:48:26.182439 kubelet[2780]: E1105 15:48:26.182384 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:48:26.182597 kubelet[2780]: E1105 15:48:26.182541 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtp75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fc44484bc-vfb78_calico-system(e90854ce-cf6e-4a39-9e2a-1f06e654f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:26.184872 kubelet[2780]: E1105 15:48:26.184813 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:48:26.312209 kubelet[2780]: E1105 15:48:26.311961 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:26.313082 containerd[1603]: time="2025-11-05T15:48:26.312976366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgrpw,Uid:121b1dfc-e268-4e0e-8768-d86d30928206,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:26.437669 systemd-networkd[1515]: cali810b92a1dd2: Link UP Nov 5 15:48:26.439273 systemd-networkd[1515]: cali810b92a1dd2: Gained carrier Nov 5 15:48:26.457237 containerd[1603]: 2025-11-05 15:48:26.365 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0 coredns-668d6bf9bc- kube-system 121b1dfc-e268-4e0e-8768-d86d30928206 807 0 2025-11-05 15:47:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-60-160 coredns-668d6bf9bc-zgrpw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali810b92a1dd2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-" Nov 5 15:48:26.457237 containerd[1603]: 2025-11-05 15:48:26.365 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" Nov 5 15:48:26.457237 containerd[1603]: 2025-11-05 15:48:26.396 [INFO][4436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" 
HandleID="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Workload="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.396 [INFO][4436] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" HandleID="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Workload="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-60-160", "pod":"coredns-668d6bf9bc-zgrpw", "timestamp":"2025-11-05 15:48:26.396519103 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.396 [INFO][4436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.396 [INFO][4436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.396 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.403 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" host="172-239-60-160" Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.408 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.413 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.415 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.417 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:26.457797 containerd[1603]: 2025-11-05 15:48:26.417 [INFO][4436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" host="172-239-60-160" Nov 5 15:48:26.459415 containerd[1603]: 2025-11-05 15:48:26.418 [INFO][4436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6 Nov 5 15:48:26.459415 containerd[1603]: 2025-11-05 15:48:26.425 [INFO][4436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" host="172-239-60-160" Nov 5 15:48:26.459415 containerd[1603]: 2025-11-05 15:48:26.430 [INFO][4436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.5/26] block=192.168.50.0/26 
handle="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" host="172-239-60-160" Nov 5 15:48:26.459415 containerd[1603]: 2025-11-05 15:48:26.430 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.5/26] handle="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" host="172-239-60-160" Nov 5 15:48:26.459415 containerd[1603]: 2025-11-05 15:48:26.431 [INFO][4436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:26.459415 containerd[1603]: 2025-11-05 15:48:26.431 [INFO][4436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.5/26] IPv6=[] ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" HandleID="k8s-pod-network.5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Workload="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" Nov 5 15:48:26.460096 containerd[1603]: 2025-11-05 15:48:26.434 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"121b1dfc-e268-4e0e-8768-d86d30928206", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 47, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"coredns-668d6bf9bc-zgrpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali810b92a1dd2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:26.460096 containerd[1603]: 2025-11-05 15:48:26.434 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.5/32] ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" Nov 5 15:48:26.460096 containerd[1603]: 2025-11-05 15:48:26.434 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali810b92a1dd2 ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" Nov 5 15:48:26.460096 containerd[1603]: 2025-11-05 15:48:26.440 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" Nov 5 15:48:26.460096 containerd[1603]: 2025-11-05 15:48:26.440 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"121b1dfc-e268-4e0e-8768-d86d30928206", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 47, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6", Pod:"coredns-668d6bf9bc-zgrpw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali810b92a1dd2", MAC:"72:27:12:de:f2:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:26.460096 containerd[1603]: 2025-11-05 15:48:26.452 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgrpw" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--zgrpw-eth0" Nov 5 15:48:26.464861 kubelet[2780]: E1105 15:48:26.464766 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:48:26.469122 kubelet[2780]: E1105 15:48:26.469099 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" 
podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:48:26.472875 kubelet[2780]: E1105 15:48:26.472854 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:48:26.493433 containerd[1603]: time="2025-11-05T15:48:26.493380086Z" level=info msg="connecting to shim 5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6" address="unix:///run/containerd/s/208f165ef70ad8b38dba1402b3ef65f5513e9ed2ad3066713cfec8edc59611c0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:26.531183 systemd[1]: Started cri-containerd-5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6.scope - libcontainer container 5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6. 
Nov 5 15:48:26.601750 containerd[1603]: time="2025-11-05T15:48:26.601632917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgrpw,Uid:121b1dfc-e268-4e0e-8768-d86d30928206,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6\"" Nov 5 15:48:26.603803 kubelet[2780]: E1105 15:48:26.602609 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:26.606791 containerd[1603]: time="2025-11-05T15:48:26.606010463Z" level=info msg="CreateContainer within sandbox \"5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:48:26.621742 containerd[1603]: time="2025-11-05T15:48:26.620842468Z" level=info msg="Container 8624de76cdc85a5f532bb4a9fe156edc579d451cc4fc19b6568b381bb23f7a0c: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:26.632351 containerd[1603]: time="2025-11-05T15:48:26.632134157Z" level=info msg="CreateContainer within sandbox \"5e017b65b8efb5664b15ec19e84f2c296ed0460f9d5dd76d430e9973ffa5bef6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8624de76cdc85a5f532bb4a9fe156edc579d451cc4fc19b6568b381bb23f7a0c\"" Nov 5 15:48:26.633785 containerd[1603]: time="2025-11-05T15:48:26.633757405Z" level=info msg="StartContainer for \"8624de76cdc85a5f532bb4a9fe156edc579d451cc4fc19b6568b381bb23f7a0c\"" Nov 5 15:48:26.639254 containerd[1603]: time="2025-11-05T15:48:26.639156030Z" level=info msg="connecting to shim 8624de76cdc85a5f532bb4a9fe156edc579d451cc4fc19b6568b381bb23f7a0c" address="unix:///run/containerd/s/208f165ef70ad8b38dba1402b3ef65f5513e9ed2ad3066713cfec8edc59611c0" protocol=ttrpc version=3 Nov 5 15:48:26.677003 systemd[1]: Started cri-containerd-8624de76cdc85a5f532bb4a9fe156edc579d451cc4fc19b6568b381bb23f7a0c.scope - 
libcontainer container 8624de76cdc85a5f532bb4a9fe156edc579d451cc4fc19b6568b381bb23f7a0c. Nov 5 15:48:26.740604 containerd[1603]: time="2025-11-05T15:48:26.740461189Z" level=info msg="StartContainer for \"8624de76cdc85a5f532bb4a9fe156edc579d451cc4fc19b6568b381bb23f7a0c\" returns successfully" Nov 5 15:48:26.923955 systemd-networkd[1515]: calicd4bb605c05: Gained IPv6LL Nov 5 15:48:27.312806 kubelet[2780]: E1105 15:48:27.312354 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:27.314148 containerd[1603]: time="2025-11-05T15:48:27.313921785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77dvp,Uid:765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac,Namespace:kube-system,Attempt:0,}" Nov 5 15:48:27.438413 systemd-networkd[1515]: cali9777b29020b: Link UP Nov 5 15:48:27.439989 systemd-networkd[1515]: cali9777b29020b: Gained carrier Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.357 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0 coredns-668d6bf9bc- kube-system 765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac 796 0 2025-11-05 15:47:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-60-160 coredns-668d6bf9bc-77dvp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9777b29020b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.357 [INFO][4535] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.385 [INFO][4548] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" HandleID="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Workload="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.385 [INFO][4548] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" HandleID="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Workload="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-60-160", "pod":"coredns-668d6bf9bc-77dvp", "timestamp":"2025-11-05 15:48:27.385854853 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.386 [INFO][4548] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.386 [INFO][4548] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.386 [INFO][4548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.399 [INFO][4548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.405 [INFO][4548] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.412 [INFO][4548] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.414 [INFO][4548] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.416 [INFO][4548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.416 [INFO][4548] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.419 [INFO][4548] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1 Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.424 [INFO][4548] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.431 [INFO][4548] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.6/26] block=192.168.50.0/26 
handle="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.431 [INFO][4548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.6/26] handle="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" host="172-239-60-160" Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.431 [INFO][4548] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:27.457612 containerd[1603]: 2025-11-05 15:48:27.431 [INFO][4548] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.6/26] IPv6=[] ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" HandleID="k8s-pod-network.6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Workload="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" Nov 5 15:48:27.458446 containerd[1603]: 2025-11-05 15:48:27.434 [INFO][4535] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 47, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"coredns-668d6bf9bc-77dvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9777b29020b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:27.458446 containerd[1603]: 2025-11-05 15:48:27.434 [INFO][4535] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.6/32] ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" Nov 5 15:48:27.458446 containerd[1603]: 2025-11-05 15:48:27.434 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9777b29020b ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" Nov 5 15:48:27.458446 containerd[1603]: 2025-11-05 15:48:27.441 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" Nov 5 15:48:27.458446 containerd[1603]: 2025-11-05 15:48:27.444 [INFO][4535] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 47, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1", Pod:"coredns-668d6bf9bc-77dvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9777b29020b", MAC:"a2:b5:8a:a3:70:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:27.458446 containerd[1603]: 2025-11-05 15:48:27.453 [INFO][4535] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" Namespace="kube-system" Pod="coredns-668d6bf9bc-77dvp" WorkloadEndpoint="172--239--60--160-k8s-coredns--668d6bf9bc--77dvp-eth0" Nov 5 15:48:27.487791 kubelet[2780]: E1105 15:48:27.487442 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:27.497978 kubelet[2780]: E1105 15:48:27.497095 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:48:27.516407 kubelet[2780]: E1105 15:48:27.516384 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:48:27.518750 kubelet[2780]: E1105 15:48:27.518322 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:48:27.528256 containerd[1603]: time="2025-11-05T15:48:27.528198381Z" level=info msg="connecting to shim 6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1" address="unix:///run/containerd/s/cdba57ec4d9cba5583fc09d51e8d0ffb9b9b29a8c02fdd4101ffdb49591b0327" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:27.548643 kubelet[2780]: I1105 15:48:27.548347 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zgrpw" podStartSLOduration=34.548335671 podStartE2EDuration="34.548335671s" podCreationTimestamp="2025-11-05 15:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:48:27.525057754 +0000 UTC m=+40.304292636" watchObservedRunningTime="2025-11-05 15:48:27.548335671 +0000 UTC m=+40.327570553" Nov 5 15:48:27.564246 systemd-networkd[1515]: cali1d16b81dc95: Gained IPv6LL Nov 5 15:48:27.613848 systemd[1]: Started cri-containerd-6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1.scope - libcontainer 
container 6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1. Nov 5 15:48:27.626849 systemd-networkd[1515]: cali6c748460735: Gained IPv6LL Nov 5 15:48:27.704951 containerd[1603]: time="2025-11-05T15:48:27.704893324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-77dvp,Uid:765f74a1-9298-4f11-a1c1-9dfc8dc0f7ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1\"" Nov 5 15:48:27.707049 kubelet[2780]: E1105 15:48:27.707003 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:27.714471 containerd[1603]: time="2025-11-05T15:48:27.714434735Z" level=info msg="CreateContainer within sandbox \"6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:48:27.735755 containerd[1603]: time="2025-11-05T15:48:27.734134025Z" level=info msg="Container 22ae88aba1525970f5bcfc3bf90a600a10fa7ae892f09ff244d69de301ff7664: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:48:27.740068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983672345.mount: Deactivated successfully. 
Nov 5 15:48:27.748349 containerd[1603]: time="2025-11-05T15:48:27.748295701Z" level=info msg="CreateContainer within sandbox \"6cd0602daaaf0e131647b4ef85ff16c32914ab4237de1e1dd431ef0ff18eb4f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22ae88aba1525970f5bcfc3bf90a600a10fa7ae892f09ff244d69de301ff7664\"" Nov 5 15:48:27.748826 containerd[1603]: time="2025-11-05T15:48:27.748793160Z" level=info msg="StartContainer for \"22ae88aba1525970f5bcfc3bf90a600a10fa7ae892f09ff244d69de301ff7664\"" Nov 5 15:48:27.749638 containerd[1603]: time="2025-11-05T15:48:27.749599589Z" level=info msg="connecting to shim 22ae88aba1525970f5bcfc3bf90a600a10fa7ae892f09ff244d69de301ff7664" address="unix:///run/containerd/s/cdba57ec4d9cba5583fc09d51e8d0ffb9b9b29a8c02fdd4101ffdb49591b0327" protocol=ttrpc version=3 Nov 5 15:48:27.784079 systemd[1]: Started cri-containerd-22ae88aba1525970f5bcfc3bf90a600a10fa7ae892f09ff244d69de301ff7664.scope - libcontainer container 22ae88aba1525970f5bcfc3bf90a600a10fa7ae892f09ff244d69de301ff7664. 
Nov 5 15:48:27.827139 containerd[1603]: time="2025-11-05T15:48:27.826789812Z" level=info msg="StartContainer for \"22ae88aba1525970f5bcfc3bf90a600a10fa7ae892f09ff244d69de301ff7664\" returns successfully" Nov 5 15:48:27.947100 systemd-networkd[1515]: cali810b92a1dd2: Gained IPv6LL Nov 5 15:48:28.312773 containerd[1603]: time="2025-11-05T15:48:28.312665536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-74nvz,Uid:e61d1196-bf4a-4bdd-877c-9ea9a871d23c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:48:28.436793 systemd-networkd[1515]: calic39603260c4: Link UP Nov 5 15:48:28.437373 systemd-networkd[1515]: calic39603260c4: Gained carrier Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.368 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0 calico-apiserver-6644f9d4c6- calico-apiserver e61d1196-bf4a-4bdd-877c-9ea9a871d23c 804 0 2025-11-05 15:48:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6644f9d4c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-60-160 calico-apiserver-6644f9d4c6-74nvz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic39603260c4 [] [] }} ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.368 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" 
WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.397 [INFO][4658] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" HandleID="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Workload="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.397 [INFO][4658] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" HandleID="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Workload="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-60-160", "pod":"calico-apiserver-6644f9d4c6-74nvz", "timestamp":"2025-11-05 15:48:28.397356302 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.397 [INFO][4658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.397 [INFO][4658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.397 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.403 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.407 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.413 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.415 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.417 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.417 [INFO][4658] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.419 [INFO][4658] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8 Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.423 [INFO][4658] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.429 [INFO][4658] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.7/26] block=192.168.50.0/26 
handle="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.429 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.7/26] handle="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" host="172-239-60-160" Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.429 [INFO][4658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:28.455633 containerd[1603]: 2025-11-05 15:48:28.429 [INFO][4658] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.7/26] IPv6=[] ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" HandleID="k8s-pod-network.85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Workload="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" Nov 5 15:48:28.456667 containerd[1603]: 2025-11-05 15:48:28.433 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0", GenerateName:"calico-apiserver-6644f9d4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e61d1196-bf4a-4bdd-877c-9ea9a871d23c", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f9d4c6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"calico-apiserver-6644f9d4c6-74nvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic39603260c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:28.456667 containerd[1603]: 2025-11-05 15:48:28.433 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.7/32] ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" Nov 5 15:48:28.456667 containerd[1603]: 2025-11-05 15:48:28.433 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic39603260c4 ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" Nov 5 15:48:28.456667 containerd[1603]: 2025-11-05 15:48:28.437 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" Nov 5 15:48:28.456667 containerd[1603]: 2025-11-05 15:48:28.438 [INFO][4646] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0", GenerateName:"calico-apiserver-6644f9d4c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e61d1196-bf4a-4bdd-877c-9ea9a871d23c", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6644f9d4c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8", Pod:"calico-apiserver-6644f9d4c6-74nvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic39603260c4", MAC:"be:01:22:f7:cd:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:28.456667 containerd[1603]: 2025-11-05 15:48:28.450 [INFO][4646] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" Namespace="calico-apiserver" Pod="calico-apiserver-6644f9d4c6-74nvz" WorkloadEndpoint="172--239--60--160-k8s-calico--apiserver--6644f9d4c6--74nvz-eth0" Nov 5 15:48:28.480673 containerd[1603]: time="2025-11-05T15:48:28.480301279Z" level=info msg="connecting to shim 85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8" address="unix:///run/containerd/s/1187ae547ad53ea312eacf6dbd815e4b2389808df46c838fea3ee8f844bf726d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:28.495710 kubelet[2780]: E1105 15:48:28.495676 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:28.496602 kubelet[2780]: E1105 15:48:28.496439 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:28.515820 kubelet[2780]: I1105 15:48:28.515518 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-77dvp" podStartSLOduration=35.515503754 podStartE2EDuration="35.515503754s" podCreationTimestamp="2025-11-05 15:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:48:28.514076325 +0000 UTC m=+41.293311207" watchObservedRunningTime="2025-11-05 15:48:28.515503754 +0000 UTC m=+41.294738636" Nov 5 15:48:28.528998 systemd[1]: Started cri-containerd-85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8.scope - libcontainer container 85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8. 
Nov 5 15:48:28.612066 containerd[1603]: time="2025-11-05T15:48:28.611964847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6644f9d4c6-74nvz,Uid:e61d1196-bf4a-4bdd-877c-9ea9a871d23c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"85399fb328501aa330b941d5a54497648ca2623b67b27461a3ca822aea38ced8\"" Nov 5 15:48:28.613812 containerd[1603]: time="2025-11-05T15:48:28.613528965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:48:28.758192 containerd[1603]: time="2025-11-05T15:48:28.758111341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:28.759060 containerd[1603]: time="2025-11-05T15:48:28.759002400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:48:28.759131 containerd[1603]: time="2025-11-05T15:48:28.759095240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:48:28.760306 kubelet[2780]: E1105 15:48:28.759428 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:28.760306 kubelet[2780]: E1105 15:48:28.759901 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:28.762025 kubelet[2780]: E1105 15:48:28.760223 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cssb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-74nvz_calico-apiserver(e61d1196-bf4a-4bdd-877c-9ea9a871d23c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:28.763193 kubelet[2780]: E1105 15:48:28.763076 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:48:28.778914 systemd-networkd[1515]: cali9777b29020b: Gained IPv6LL Nov 5 15:48:29.312921 containerd[1603]: time="2025-11-05T15:48:29.312622636Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-6slgh,Uid:175c15d8-2ca8-4a9b-b355-438a1e3fa9fd,Namespace:calico-system,Attempt:0,}" Nov 5 15:48:29.411764 systemd-networkd[1515]: calie691b73f412: Link UP Nov 5 15:48:29.412556 systemd-networkd[1515]: calie691b73f412: Gained carrier Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.350 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--60--160-k8s-csi--node--driver--6slgh-eth0 csi-node-driver- calico-system 175c15d8-2ca8-4a9b-b355-438a1e3fa9fd 704 0 2025-11-05 15:48:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-60-160 csi-node-driver-6slgh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie691b73f412 [] [] }} ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.350 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.377 [INFO][4733] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" HandleID="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Workload="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" Nov 5 15:48:29.431188 
containerd[1603]: 2025-11-05 15:48:29.377 [INFO][4733] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" HandleID="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Workload="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f220), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-60-160", "pod":"csi-node-driver-6slgh", "timestamp":"2025-11-05 15:48:29.377384032 +0000 UTC"}, Hostname:"172-239-60-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.377 [INFO][4733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.377 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.377 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-60-160' Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.383 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.388 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.391 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.392 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.394 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.394 [INFO][4733] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.395 [INFO][4733] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.399 [INFO][4733] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.404 [INFO][4733] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.8/26] block=192.168.50.0/26 
handle="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.404 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.8/26] handle="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" host="172-239-60-160" Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.404 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:48:29.431188 containerd[1603]: 2025-11-05 15:48:29.404 [INFO][4733] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.8/26] IPv6=[] ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" HandleID="k8s-pod-network.a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Workload="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" Nov 5 15:48:29.431685 containerd[1603]: 2025-11-05 15:48:29.408 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-csi--node--driver--6slgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"175c15d8-2ca8-4a9b-b355-438a1e3fa9fd", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"", Pod:"csi-node-driver-6slgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie691b73f412", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:29.431685 containerd[1603]: 2025-11-05 15:48:29.408 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.8/32] ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" Nov 5 15:48:29.431685 containerd[1603]: 2025-11-05 15:48:29.408 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie691b73f412 ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" Nov 5 15:48:29.431685 containerd[1603]: 2025-11-05 15:48:29.413 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" Nov 5 15:48:29.431685 containerd[1603]: 2025-11-05 15:48:29.413 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--60--160-k8s-csi--node--driver--6slgh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"175c15d8-2ca8-4a9b-b355-438a1e3fa9fd", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-60-160", ContainerID:"a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c", Pod:"csi-node-driver-6slgh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie691b73f412", MAC:"62:d4:86:15:f2:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:48:29.431685 containerd[1603]: 2025-11-05 15:48:29.422 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" 
Namespace="calico-system" Pod="csi-node-driver-6slgh" WorkloadEndpoint="172--239--60--160-k8s-csi--node--driver--6slgh-eth0" Nov 5 15:48:29.465149 containerd[1603]: time="2025-11-05T15:48:29.465106374Z" level=info msg="connecting to shim a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c" address="unix:///run/containerd/s/f28827c5303593ce9da83ee634b793bce13630a6a2f8d40591f66bf6493bde8d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:48:29.495897 systemd[1]: Started cri-containerd-a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c.scope - libcontainer container a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c. Nov 5 15:48:29.498971 kubelet[2780]: E1105 15:48:29.498943 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:29.500884 kubelet[2780]: E1105 15:48:29.499198 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:29.500884 kubelet[2780]: E1105 15:48:29.500429 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:48:29.548257 systemd-networkd[1515]: calic39603260c4: Gained IPv6LL Nov 5 15:48:29.558962 containerd[1603]: time="2025-11-05T15:48:29.558056651Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-6slgh,Uid:175c15d8-2ca8-4a9b-b355-438a1e3fa9fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"a507f8c562e01251d7aa60109bc0dc7a46e821588045798636fc839a766ddf8c\"" Nov 5 15:48:29.562289 containerd[1603]: time="2025-11-05T15:48:29.562259847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:48:29.718950 containerd[1603]: time="2025-11-05T15:48:29.718682760Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:29.720125 containerd[1603]: time="2025-11-05T15:48:29.720047729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:48:29.720243 kubelet[2780]: E1105 15:48:29.720192 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:48:29.720243 kubelet[2780]: E1105 15:48:29.720238 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:48:29.720449 kubelet[2780]: E1105 15:48:29.720346 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:29.720558 containerd[1603]: time="2025-11-05T15:48:29.720305279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:48:29.723155 containerd[1603]: time="2025-11-05T15:48:29.723066026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:48:29.901384 containerd[1603]: time="2025-11-05T15:48:29.901308998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:29.902335 containerd[1603]: time="2025-11-05T15:48:29.902272477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:48:29.903065 containerd[1603]: time="2025-11-05T15:48:29.902441637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:48:29.903128 kubelet[2780]: E1105 15:48:29.902704 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:48:29.903128 kubelet[2780]: E1105 15:48:29.902803 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:48:29.903128 kubelet[2780]: E1105 15:48:29.902995 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Volu
meDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:29.904750 kubelet[2780]: E1105 15:48:29.904582 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:30.501838 kubelet[2780]: E1105 15:48:30.501801 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:30.503551 kubelet[2780]: E1105 15:48:30.502922 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:30.504027 kubelet[2780]: E1105 15:48:30.504001 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:48:30.570918 systemd-networkd[1515]: calie691b73f412: Gained IPv6LL Nov 5 15:48:31.504553 kubelet[2780]: E1105 15:48:31.504432 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:36.313018 containerd[1603]: time="2025-11-05T15:48:36.312912155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:48:36.469641 containerd[1603]: time="2025-11-05T15:48:36.469589038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:36.470640 containerd[1603]: time="2025-11-05T15:48:36.470571279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:48:36.470887 containerd[1603]: time="2025-11-05T15:48:36.470587357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:48:36.471232 kubelet[2780]: E1105 15:48:36.471157 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:48:36.471924 kubelet[2780]: E1105 15:48:36.471204 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:48:36.472276 kubelet[2780]: E1105 15:48:36.471992 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3624bb609f634714aab3714467b41e19,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:36.474918 containerd[1603]: time="2025-11-05T15:48:36.474800177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:48:36.611885 containerd[1603]: time="2025-11-05T15:48:36.611582941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:36.613004 containerd[1603]: time="2025-11-05T15:48:36.612941595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:48:36.613049 containerd[1603]: time="2025-11-05T15:48:36.613011215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:48:36.613146 kubelet[2780]: E1105 15:48:36.613120 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:48:36.613224 kubelet[2780]: E1105 15:48:36.613157 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:48:36.613295 
kubelet[2780]: E1105 15:48:36.613249 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:36.614714 kubelet[2780]: E1105 15:48:36.614673 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:48:38.312765 containerd[1603]: time="2025-11-05T15:48:38.312676221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:48:38.449286 containerd[1603]: time="2025-11-05T15:48:38.449229526Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:38.450318 containerd[1603]: time="2025-11-05T15:48:38.450252489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:48:38.450369 containerd[1603]: 
time="2025-11-05T15:48:38.450329889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:48:38.450471 kubelet[2780]: E1105 15:48:38.450434 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:48:38.451108 kubelet[2780]: E1105 15:48:38.450476 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:48:38.451108 kubelet[2780]: E1105 15:48:38.450579 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktptk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwjt_calico-system(3b4571ca-2142-4ad0-85e2-e8b00b2fb524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:38.451932 kubelet[2780]: E1105 15:48:38.451853 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:48:39.313765 containerd[1603]: time="2025-11-05T15:48:39.313399074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:48:39.503738 containerd[1603]: time="2025-11-05T15:48:39.503661110Z" level=info msg="fetch failed after status: 
404 Not Found" host=ghcr.io Nov 5 15:48:39.505586 containerd[1603]: time="2025-11-05T15:48:39.505446226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:48:39.505832 containerd[1603]: time="2025-11-05T15:48:39.505798542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:48:39.506131 kubelet[2780]: E1105 15:48:39.506089 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:48:39.507207 kubelet[2780]: E1105 15:48:39.506328 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:48:39.507207 kubelet[2780]: E1105 15:48:39.506442 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtp75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fc44484bc-vfb78_calico-system(e90854ce-cf6e-4a39-9e2a-1f06e654f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:39.508094 kubelet[2780]: E1105 15:48:39.508034 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:48:42.313897 containerd[1603]: time="2025-11-05T15:48:42.313854531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:48:42.462527 containerd[1603]: 
time="2025-11-05T15:48:42.462453730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:42.463399 containerd[1603]: time="2025-11-05T15:48:42.463353277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:48:42.463566 containerd[1603]: time="2025-11-05T15:48:42.463433539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:48:42.463601 kubelet[2780]: E1105 15:48:42.463533 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:42.463601 kubelet[2780]: E1105 15:48:42.463569 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:42.464225 kubelet[2780]: E1105 15:48:42.463682 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-px2b8_calico-apiserver(d8afeb1c-714d-4335-9a1d-a1135daaa2b3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:42.465763 kubelet[2780]: E1105 15:48:42.465270 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:48:44.314107 containerd[1603]: time="2025-11-05T15:48:44.313968751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:48:44.453648 containerd[1603]: time="2025-11-05T15:48:44.453576111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:44.454744 containerd[1603]: time="2025-11-05T15:48:44.454390018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:48:44.454744 containerd[1603]: time="2025-11-05T15:48:44.454539104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:48:44.454846 kubelet[2780]: E1105 15:48:44.454757 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:48:44.454846 kubelet[2780]: E1105 15:48:44.454796 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:48:44.455971 kubelet[2780]: E1105 15:48:44.454944 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:44.457954 containerd[1603]: time="2025-11-05T15:48:44.457924496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:48:44.693810 containerd[1603]: time="2025-11-05T15:48:44.693182881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:44.694756 containerd[1603]: time="2025-11-05T15:48:44.694638578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:48:44.694921 containerd[1603]: time="2025-11-05T15:48:44.694772606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:48:44.694952 kubelet[2780]: E1105 15:48:44.694860 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:48:44.694952 kubelet[2780]: E1105 15:48:44.694888 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:48:44.695956 kubelet[2780]: E1105 15:48:44.694963 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:44.696418 kubelet[2780]: E1105 15:48:44.696284 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:45.314823 containerd[1603]: time="2025-11-05T15:48:45.314550723Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:48:45.443655 containerd[1603]: time="2025-11-05T15:48:45.443609471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:48:45.444993 containerd[1603]: time="2025-11-05T15:48:45.444809118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:48:45.444993 containerd[1603]: time="2025-11-05T15:48:45.444873643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:48:45.445289 kubelet[2780]: E1105 15:48:45.445176 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:45.445289 kubelet[2780]: E1105 15:48:45.445246 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:48:45.445770 kubelet[2780]: E1105 15:48:45.445670 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cssb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-74nvz_calico-apiserver(e61d1196-bf4a-4bdd-877c-9ea9a871d23c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:48:45.446911 kubelet[2780]: E1105 15:48:45.446870 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:48:49.522118 containerd[1603]: time="2025-11-05T15:48:49.521938207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"27a3d8ab028134c7b5f4bb13a567f314a87099a8bb5c834ed583373360d8e8e2\" pid:4830 exited_at:{seconds:1762357729 nanos:521432540}" Nov 5 15:48:49.525093 kubelet[2780]: E1105 15:48:49.525064 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:48:50.313860 kubelet[2780]: E1105 15:48:50.313448 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 
15:48:51.315352 kubelet[2780]: E1105 15:48:51.315250 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:48:52.313214 kubelet[2780]: E1105 15:48:52.313169 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:48:56.313551 kubelet[2780]: E1105 15:48:56.313486 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:48:57.316936 kubelet[2780]: E1105 15:48:57.315860 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:48:59.319266 kubelet[2780]: E1105 15:48:59.318977 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:49:04.313877 containerd[1603]: time="2025-11-05T15:49:04.313817570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:49:04.486874 containerd[1603]: time="2025-11-05T15:49:04.486792894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:04.488118 containerd[1603]: time="2025-11-05T15:49:04.488052992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:49:04.488237 containerd[1603]: time="2025-11-05T15:49:04.488079861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:49:04.488661 kubelet[2780]: E1105 15:49:04.488607 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:04.488661 kubelet[2780]: E1105 15:49:04.488667 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:04.490059 kubelet[2780]: E1105 15:49:04.489447 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3624bb609f634714aab3714467b41e19,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:04.490415 containerd[1603]: time="2025-11-05T15:49:04.490387611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 
15:49:04.645476 containerd[1603]: time="2025-11-05T15:49:04.644869851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:04.646353 containerd[1603]: time="2025-11-05T15:49:04.646297674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:49:04.646456 containerd[1603]: time="2025-11-05T15:49:04.646427201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:04.647008 kubelet[2780]: E1105 15:49:04.646953 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:04.647104 kubelet[2780]: E1105 15:49:04.647037 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:04.647475 kubelet[2780]: E1105 15:49:04.647289 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtp75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fc44484bc-vfb78_calico-system(e90854ce-cf6e-4a39-9e2a-1f06e654f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:04.648817 containerd[1603]: time="2025-11-05T15:49:04.648776421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:49:04.649892 kubelet[2780]: E1105 15:49:04.649825 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:49:04.801007 
containerd[1603]: time="2025-11-05T15:49:04.800957540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:04.802160 containerd[1603]: time="2025-11-05T15:49:04.802061832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:49:04.802160 containerd[1603]: time="2025-11-05T15:49:04.802087691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:04.802397 kubelet[2780]: E1105 15:49:04.802359 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:04.802483 kubelet[2780]: E1105 15:49:04.802411 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:04.802748 kubelet[2780]: E1105 15:49:04.802576 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:04.803769 kubelet[2780]: E1105 15:49:04.803741 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:49:05.313629 kubelet[2780]: E1105 15:49:05.313588 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:49:05.316372 containerd[1603]: time="2025-11-05T15:49:05.316340020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:49:05.523612 containerd[1603]: time="2025-11-05T15:49:05.523546943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:05.524740 containerd[1603]: time="2025-11-05T15:49:05.524671306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:49:05.529834 containerd[1603]: time="2025-11-05T15:49:05.524699495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:05.529901 kubelet[2780]: E1105 15:49:05.524865 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:05.529901 kubelet[2780]: E1105 15:49:05.524907 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:05.529901 kubelet[2780]: E1105 15:49:05.525006 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktptk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwjt_calico-system(3b4571ca-2142-4ad0-85e2-e8b00b2fb524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:05.529901 kubelet[2780]: E1105 15:49:05.528107 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:49:08.314487 containerd[1603]: time="2025-11-05T15:49:08.314410148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:08.448957 containerd[1603]: time="2025-11-05T15:49:08.448895641Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 5 15:49:08.450288 containerd[1603]: time="2025-11-05T15:49:08.450110057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:08.450288 containerd[1603]: time="2025-11-05T15:49:08.450210304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:08.450706 kubelet[2780]: E1105 15:49:08.450462 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:08.451299 kubelet[2780]: E1105 15:49:08.450709 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:08.451299 kubelet[2780]: E1105 15:49:08.451014 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-px2b8_calico-apiserver(d8afeb1c-714d-4335-9a1d-a1135daaa2b3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:08.452518 kubelet[2780]: E1105 15:49:08.452266 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:49:10.312741 kubelet[2780]: E1105 15:49:10.311693 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:49:12.315882 containerd[1603]: time="2025-11-05T15:49:12.315779917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:49:12.467243 containerd[1603]: time="2025-11-05T15:49:12.467196758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:12.468424 containerd[1603]: time="2025-11-05T15:49:12.468371269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:49:12.468601 containerd[1603]: time="2025-11-05T15:49:12.468437098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:49:12.468637 kubelet[2780]: E1105 15:49:12.468559 2780 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:12.468637 kubelet[2780]: E1105 15:49:12.468602 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:49:12.469169 kubelet[2780]: E1105 15:49:12.468701 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:12.472114 containerd[1603]: time="2025-11-05T15:49:12.472085790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:49:12.596961 containerd[1603]: time="2025-11-05T15:49:12.596135073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:12.597647 containerd[1603]: time="2025-11-05T15:49:12.597543712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:49:12.597647 containerd[1603]: time="2025-11-05T15:49:12.597580311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:49:12.597890 kubelet[2780]: E1105 15:49:12.597843 2780 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:12.597890 kubelet[2780]: E1105 15:49:12.597893 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:49:12.598071 kubelet[2780]: E1105 15:49:12.598001 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:12.599263 kubelet[2780]: E1105 15:49:12.599233 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:49:13.317866 containerd[1603]: time="2025-11-05T15:49:13.317816772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:13.455067 containerd[1603]: time="2025-11-05T15:49:13.454947515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:13.456987 containerd[1603]: time="2025-11-05T15:49:13.456749578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:13.456987 containerd[1603]: time="2025-11-05T15:49:13.456856387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:13.457760 kubelet[2780]: E1105 15:49:13.457455 2780 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:13.457760 kubelet[2780]: E1105 15:49:13.457507 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:13.457760 kubelet[2780]: E1105 15:49:13.457612 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cssb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-74nvz_calico-apiserver(e61d1196-bf4a-4bdd-877c-9ea9a871d23c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:13.458848 kubelet[2780]: E1105 15:49:13.458818 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:49:17.314022 kubelet[2780]: E1105 15:49:17.313956 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:49:19.312120 kubelet[2780]: E1105 15:49:19.312050 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:49:19.315091 kubelet[2780]: E1105 15:49:19.314943 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:49:19.315528 kubelet[2780]: E1105 15:49:19.315467 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:49:19.521999 containerd[1603]: time="2025-11-05T15:49:19.521943286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"645714e2b76f127ea67c1f41d7b1056f3bc142f5d1a536a26d6a80736b6ffdbf\" pid:4868 exited_at:{seconds:1762357759 nanos:521387653}" Nov 5 15:49:20.313326 kubelet[2780]: E1105 15:49:20.313248 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:49:20.314647 
kubelet[2780]: E1105 15:49:20.314400 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:49:23.315917 kubelet[2780]: E1105 15:49:23.315800 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:49:26.314900 kubelet[2780]: E1105 15:49:26.314354 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:49:30.315686 kubelet[2780]: E1105 15:49:30.315139 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:49:30.316657 kubelet[2780]: E1105 15:49:30.316239 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 
15:49:32.314443 kubelet[2780]: E1105 15:49:32.314357 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:49:34.319749 kubelet[2780]: E1105 15:49:34.319049 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:49:36.316544 kubelet[2780]: E1105 15:49:36.315017 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:49:37.315755 kubelet[2780]: E1105 15:49:37.315691 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:49:39.315789 kubelet[2780]: E1105 15:49:39.315195 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:49:42.312462 kubelet[2780]: E1105 15:49:42.312113 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:49:43.315772 kubelet[2780]: E1105 15:49:43.315565 2780 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:49:43.318255 kubelet[2780]: E1105 15:49:43.317541 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:49:45.314424 containerd[1603]: time="2025-11-05T15:49:45.314320122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:49:45.468180 containerd[1603]: time="2025-11-05T15:49:45.468106795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:45.470743 containerd[1603]: time="2025-11-05T15:49:45.469435590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:49:45.470965 containerd[1603]: 
time="2025-11-05T15:49:45.469667722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:45.471079 kubelet[2780]: E1105 15:49:45.471030 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:45.471602 kubelet[2780]: E1105 15:49:45.471093 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:49:45.471602 kubelet[2780]: E1105 15:49:45.471202 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtp75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fc44484bc-vfb78_calico-system(e90854ce-cf6e-4a39-9e2a-1f06e654f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:45.472755 kubelet[2780]: E1105 15:49:45.472709 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:49:47.314624 kubelet[2780]: E1105 15:49:47.314593 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:49:47.318693 kubelet[2780]: E1105 15:49:47.318397 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:49:48.312959 kubelet[2780]: E1105 15:49:48.312886 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:49:49.537320 containerd[1603]: time="2025-11-05T15:49:49.537281382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" 
id:\"9c498f686d6a8b52f13a79b58a5b84abd3e9ad53faa86ee56d09f75b4c1527de\" pid:4907 exited_at:{seconds:1762357789 nanos:536986745}" Nov 5 15:49:54.315068 containerd[1603]: time="2025-11-05T15:49:54.314984792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:49:54.484002 containerd[1603]: time="2025-11-05T15:49:54.483917343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:54.485139 containerd[1603]: time="2025-11-05T15:49:54.485100162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:49:54.485261 containerd[1603]: time="2025-11-05T15:49:54.485193551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:54.485583 kubelet[2780]: E1105 15:49:54.485485 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:54.486135 kubelet[2780]: E1105 15:49:54.485597 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:49:54.486135 kubelet[2780]: E1105 15:49:54.485974 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cssb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-74nvz_calico-apiserver(e61d1196-bf4a-4bdd-877c-9ea9a871d23c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:54.488025 kubelet[2780]: E1105 15:49:54.487990 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:49:54.488436 containerd[1603]: time="2025-11-05T15:49:54.488291213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:49:54.622817 containerd[1603]: 
time="2025-11-05T15:49:54.622621592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:54.623662 containerd[1603]: time="2025-11-05T15:49:54.623613035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:49:54.623777 containerd[1603]: time="2025-11-05T15:49:54.623640451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:49:54.624027 kubelet[2780]: E1105 15:49:54.623968 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:54.624027 kubelet[2780]: E1105 15:49:54.624025 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:49:54.624197 kubelet[2780]: E1105 15:49:54.624154 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktptk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwjt_calico-system(3b4571ca-2142-4ad0-85e2-e8b00b2fb524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:54.625875 kubelet[2780]: E1105 15:49:54.625809 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:49:55.317719 containerd[1603]: time="2025-11-05T15:49:55.317681038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:49:55.463873 containerd[1603]: time="2025-11-05T15:49:55.463831526Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Nov 5 15:49:55.465353 containerd[1603]: time="2025-11-05T15:49:55.465089770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:49:55.465529 containerd[1603]: time="2025-11-05T15:49:55.465513816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:49:55.465799 kubelet[2780]: E1105 15:49:55.465757 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:55.465930 kubelet[2780]: E1105 15:49:55.465914 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:49:55.466266 kubelet[2780]: E1105 15:49:55.466233 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3624bb609f634714aab3714467b41e19,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:55.469901 containerd[1603]: time="2025-11-05T15:49:55.469670678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:49:55.631824 containerd[1603]: time="2025-11-05T15:49:55.631676569Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:49:55.633544 containerd[1603]: time="2025-11-05T15:49:55.633427740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:49:55.633544 containerd[1603]: time="2025-11-05T15:49:55.633481904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:49:55.633972 kubelet[2780]: E1105 15:49:55.633912 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:55.634698 kubelet[2780]: E1105 15:49:55.634073 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:49:55.635006 kubelet[2780]: E1105 15:49:55.634936 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:49:55.636217 kubelet[2780]: E1105 15:49:55.636175 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:49:56.312343 kubelet[2780]: E1105 15:49:56.312271 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:50:02.314495 containerd[1603]: time="2025-11-05T15:50:02.314450707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:50:02.448111 containerd[1603]: time="2025-11-05T15:50:02.447959984Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:02.449146 containerd[1603]: time="2025-11-05T15:50:02.449060319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:50:02.449324 containerd[1603]: time="2025-11-05T15:50:02.449184516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:50:02.449477 kubelet[2780]: E1105 15:50:02.449405 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:50:02.450916 kubelet[2780]: E1105 15:50:02.449486 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:50:02.450916 kubelet[2780]: E1105 15:50:02.450202 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:02.451031 containerd[1603]: time="2025-11-05T15:50:02.450052095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:50:02.644860 containerd[1603]: time="2025-11-05T15:50:02.643130463Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:02.645227 containerd[1603]: time="2025-11-05T15:50:02.645141482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:50:02.645282 containerd[1603]: time="2025-11-05T15:50:02.645263738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:50:02.645489 kubelet[2780]: E1105 15:50:02.645446 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:02.645537 kubelet[2780]: E1105 15:50:02.645523 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:50:02.646008 containerd[1603]: time="2025-11-05T15:50:02.645968455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:50:02.646312 kubelet[2780]: E1105 15:50:02.646257 2780 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-px2b8_calico-apiserver(d8afeb1c-714d-4335-9a1d-a1135daaa2b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:02.647926 kubelet[2780]: E1105 15:50:02.647901 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:50:02.771219 containerd[1603]: time="2025-11-05T15:50:02.771036746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:50:02.773806 containerd[1603]: 
time="2025-11-05T15:50:02.772329120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:50:02.773991 containerd[1603]: time="2025-11-05T15:50:02.773916784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:50:02.774214 kubelet[2780]: E1105 15:50:02.774156 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:50:02.774275 kubelet[2780]: E1105 15:50:02.774238 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:50:02.774574 kubelet[2780]: E1105 15:50:02.774382 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:50:02.775708 kubelet[2780]: E1105 15:50:02.775660 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:50:07.313789 kubelet[2780]: E1105 15:50:07.313154 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:50:07.313789 kubelet[2780]: E1105 15:50:07.313209 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065"
Nov 5 15:50:08.314436 kubelet[2780]: E1105 15:50:08.313763 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:50:08.315455 kubelet[2780]: E1105 15:50:08.315378 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:50:13.315477 kubelet[2780]: E1105 15:50:13.315397 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3"
Nov 5 15:50:14.312356 kubelet[2780]: E1105 15:50:14.312031 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:50:14.312356 kubelet[2780]: E1105 15:50:14.312190 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:50:17.314749 kubelet[2780]: E1105 15:50:17.314130 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd"
Nov 5 15:50:18.314012 kubelet[2780]: E1105 15:50:18.313944 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524"
Nov 5 15:50:19.317539 kubelet[2780]: E1105 15:50:19.317408 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:50:19.547274 containerd[1603]: time="2025-11-05T15:50:19.547209811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"e7a52010685e6ea83f143adc10a4ec1b4bf706071333c581354ae909553fb569\" pid:4954 exited_at:{seconds:1762357819 nanos:546525320}"
Nov 5 15:50:20.314182 kubelet[2780]: E1105 15:50:20.313831 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:50:21.313593 kubelet[2780]: E1105 15:50:21.312956 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065"
Nov 5 15:50:28.312100 kubelet[2780]: E1105 15:50:28.311854 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:50:28.313671 kubelet[2780]: E1105 15:50:28.313534 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3"
Nov 5 15:50:30.313401 kubelet[2780]: E1105 15:50:30.313112 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524"
Nov 5 15:50:30.314806 kubelet[2780]: E1105 15:50:30.313775 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:50:31.316356 kubelet[2780]: E1105 15:50:31.315699 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd"
Nov 5 15:50:33.315192 kubelet[2780]: E1105 15:50:33.314043 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:50:34.312580 kubelet[2780]: E1105 15:50:34.312285 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:50:34.313226 kubelet[2780]: E1105 15:50:34.313197 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065"
Nov 5 15:50:41.314878 kubelet[2780]: E1105 15:50:41.314824 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:50:42.313765 kubelet[2780]: E1105 15:50:42.313318 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3"
Nov 5 15:50:42.313765 kubelet[2780]: E1105 15:50:42.313658 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd"
Nov 5 15:50:43.314601 kubelet[2780]: E1105 15:50:43.313985 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524"
Nov 5 15:50:47.313718 kubelet[2780]: E1105 15:50:47.313328 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:50:48.312682 kubelet[2780]: E1105 15:50:48.312620 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:50:49.315408 kubelet[2780]: E1105 15:50:49.314952 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065"
Nov 5 15:50:49.318218 kubelet[2780]: E1105 15:50:49.318159 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:50:49.548433 containerd[1603]: time="2025-11-05T15:50:49.548360076Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"b95f3a297aa3ca5f602e552917b1b8755494bdc3ef410464b86ea1aa79c95560\" pid:4983 exited_at:{seconds:1762357849 nanos:547810949}"
Nov 5 15:50:50.312511 kubelet[2780]: E1105 15:50:50.312442 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:50:53.313576 kubelet[2780]: E1105 15:50:53.313483 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3"
Nov 5 15:50:54.315075 kubelet[2780]: E1105 15:50:54.315013 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:50:55.313475 kubelet[2780]: E1105 15:50:55.313052 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524"
Nov 5 15:50:55.315073 kubelet[2780]: E1105 15:50:55.314990 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd"
Nov 5 15:50:57.406117 systemd[1]: Started sshd@7-172.239.60.160:22-139.178.89.65:37682.service - OpenSSH per-connection server daemon (139.178.89.65:37682).
Nov 5 15:50:57.780942 sshd[5000]: Accepted publickey for core from 139.178.89.65 port 37682 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:50:57.785261 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:50:57.793173 systemd-logind[1580]: New session 8 of user core.
Nov 5 15:50:57.800375 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 5 15:50:58.134919 sshd[5003]: Connection closed by 139.178.89.65 port 37682
Nov 5 15:50:58.135697 sshd-session[5000]: pam_unix(sshd:session): session closed for user core
Nov 5 15:50:58.141436 systemd[1]: sshd@7-172.239.60.160:22-139.178.89.65:37682.service: Deactivated successfully.
Nov 5 15:50:58.144670 systemd[1]: session-8.scope: Deactivated successfully.
Nov 5 15:50:58.146355 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit.
Nov 5 15:50:58.148832 systemd-logind[1580]: Removed session 8.
Nov 5 15:51:00.312616 kubelet[2780]: E1105 15:51:00.312546 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:51:03.192072 systemd[1]: Started sshd@8-172.239.60.160:22-139.178.89.65:37692.service - OpenSSH per-connection server daemon (139.178.89.65:37692).
Nov 5 15:51:03.314210 kubelet[2780]: E1105 15:51:03.313902 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065"
Nov 5 15:51:03.515611 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 37692 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:51:03.517233 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:03.522547 systemd-logind[1580]: New session 9 of user core.
Nov 5 15:51:03.527850 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 5 15:51:03.820305 sshd[5026]: Connection closed by 139.178.89.65 port 37692
Nov 5 15:51:03.821024 sshd-session[5023]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:03.826531 systemd[1]: sshd@8-172.239.60.160:22-139.178.89.65:37692.service: Deactivated successfully.
Nov 5 15:51:03.828904 systemd[1]: session-9.scope: Deactivated successfully.
Nov 5 15:51:03.831598 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit.
Nov 5 15:51:03.832959 systemd-logind[1580]: Removed session 9.
Nov 5 15:51:07.314754 kubelet[2780]: E1105 15:51:07.314488 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:51:08.312641 kubelet[2780]: E1105 15:51:08.312530 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3"
Nov 5 15:51:08.881964 systemd[1]: Started sshd@9-172.239.60.160:22-139.178.89.65:47046.service - OpenSSH per-connection server daemon (139.178.89.65:47046).
Nov 5 15:51:09.220357 sshd[5039]: Accepted publickey for core from 139.178.89.65 port 47046 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:51:09.222852 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:09.232047 systemd-logind[1580]: New session 10 of user core.
Nov 5 15:51:09.236906 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 5 15:51:09.314557 kubelet[2780]: E1105 15:51:09.314509 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd"
Nov 5 15:51:09.314980 kubelet[2780]: E1105 15:51:09.314705 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:51:09.534561 sshd[5042]: Connection closed by 139.178.89.65 port 47046
Nov 5 15:51:09.535150 sshd-session[5039]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:09.541271 systemd[1]: sshd@9-172.239.60.160:22-139.178.89.65:47046.service: Deactivated successfully.
Nov 5 15:51:09.544581 systemd[1]: session-10.scope: Deactivated successfully.
Nov 5 15:51:09.546211 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit.
Nov 5 15:51:09.548767 systemd-logind[1580]: Removed session 10.
Nov 5 15:51:10.312328 kubelet[2780]: E1105 15:51:10.312198 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524"
Nov 5 15:51:14.599064 systemd[1]: Started sshd@10-172.239.60.160:22-139.178.89.65:47056.service - OpenSSH per-connection server daemon (139.178.89.65:47056).
Nov 5 15:51:14.934694 sshd[5058]: Accepted publickey for core from 139.178.89.65 port 47056 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:51:14.937111 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:14.942934 systemd-logind[1580]: New session 11 of user core.
Nov 5 15:51:14.947851 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 5 15:51:15.233859 sshd[5061]: Connection closed by 139.178.89.65 port 47056
Nov 5 15:51:15.235129 sshd-session[5058]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:15.239767 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit.
Nov 5 15:51:15.242109 systemd[1]: sshd@10-172.239.60.160:22-139.178.89.65:47056.service: Deactivated successfully.
Nov 5 15:51:15.245286 systemd[1]: session-11.scope: Deactivated successfully.
Nov 5 15:51:15.248090 systemd-logind[1580]: Removed session 11.
Nov 5 15:51:15.298453 systemd[1]: Started sshd@11-172.239.60.160:22-139.178.89.65:47062.service - OpenSSH per-connection server daemon (139.178.89.65:47062).
Nov 5 15:51:15.321748 containerd[1603]: time="2025-11-05T15:51:15.318207550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:51:15.468714 containerd[1603]: time="2025-11-05T15:51:15.468636442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:51:15.469941 containerd[1603]: time="2025-11-05T15:51:15.469904380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:51:15.470136 containerd[1603]: time="2025-11-05T15:51:15.470051766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:51:15.470540 kubelet[2780]: E1105 15:51:15.470484 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:51:15.471242 kubelet[2780]: E1105 15:51:15.471189 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:51:15.471635 kubelet[2780]: E1105 15:51:15.471504 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cssb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-74nvz_calico-apiserver(e61d1196-bf4a-4bdd-877c-9ea9a871d23c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:51:15.472886 kubelet[2780]: E1105 15:51:15.472857 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:51:15.653944 sshd[5074]: Accepted publickey for core from 139.178.89.65 port 47062 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:51:15.655530 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:15.661075 systemd-logind[1580]: New session 12 of user core.
Nov 5 15:51:15.665855 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 5 15:51:15.992379 sshd[5077]: Connection closed by 139.178.89.65 port 47062
Nov 5 15:51:15.994010 sshd-session[5074]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:15.998161 systemd[1]: sshd@11-172.239.60.160:22-139.178.89.65:47062.service: Deactivated successfully.
Nov 5 15:51:16.000359 systemd[1]: session-12.scope: Deactivated successfully.
Nov 5 15:51:16.001283 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit.
Nov 5 15:51:16.003310 systemd-logind[1580]: Removed session 12.
Nov 5 15:51:16.053119 systemd[1]: Started sshd@12-172.239.60.160:22-139.178.89.65:47076.service - OpenSSH per-connection server daemon (139.178.89.65:47076).
Nov 5 15:51:16.388875 sshd[5086]: Accepted publickey for core from 139.178.89.65 port 47076 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:51:16.391248 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:16.398558 systemd-logind[1580]: New session 13 of user core.
Nov 5 15:51:16.405840 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 5 15:51:16.732492 sshd[5089]: Connection closed by 139.178.89.65 port 47076
Nov 5 15:51:16.733867 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:16.739266 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit.
Nov 5 15:51:16.741790 systemd[1]: sshd@12-172.239.60.160:22-139.178.89.65:47076.service: Deactivated successfully.
Nov 5 15:51:16.746069 systemd[1]: session-13.scope: Deactivated successfully.
Nov 5 15:51:16.751457 systemd-logind[1580]: Removed session 13.
Nov 5 15:51:18.313576 containerd[1603]: time="2025-11-05T15:51:18.313529489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 5 15:51:18.460916 containerd[1603]: time="2025-11-05T15:51:18.460851657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:51:18.461846 containerd[1603]: time="2025-11-05T15:51:18.461810957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 5 15:51:18.462020 containerd[1603]: time="2025-11-05T15:51:18.461888945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 5 15:51:18.462234 kubelet[2780]: E1105 15:51:18.462165 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 15:51:18.463263 kubelet[2780]: E1105 15:51:18.462208 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 15:51:18.463263 kubelet[2780]: E1105 15:51:18.462530 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtp75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fc44484bc-vfb78_calico-system(e90854ce-cf6e-4a39-9e2a-1f06e654f065): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:18.464282 kubelet[2780]: E1105 15:51:18.464240 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:51:19.313132 kubelet[2780]: E1105 15:51:19.312866 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:51:19.631303 containerd[1603]: time="2025-11-05T15:51:19.631159978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"e3e5317871c08ea484e91e4d18183d35d807198660bc7df6c197f95f57ecc3b7\" pid:5111 exited_at:{seconds:1762357879 nanos:630660563}" Nov 5 15:51:20.312184 containerd[1603]: time="2025-11-05T15:51:20.312136698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:51:20.509074 containerd[1603]: time="2025-11-05T15:51:20.508796525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:20.509894 containerd[1603]: time="2025-11-05T15:51:20.509834732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:51:20.510045 containerd[1603]: time="2025-11-05T15:51:20.509853082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:51:20.510315 kubelet[2780]: E1105 15:51:20.510264 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:20.510757 kubelet[2780]: E1105 15:51:20.510319 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:20.510757 kubelet[2780]: E1105 15:51:20.510455 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3624bb609f634714aab3714467b41e19,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:20.513284 containerd[1603]: time="2025-11-05T15:51:20.513133999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:51:20.739481 containerd[1603]: time="2025-11-05T15:51:20.739222934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:20.740594 containerd[1603]: time="2025-11-05T15:51:20.740295571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:51:20.740652 containerd[1603]: time="2025-11-05T15:51:20.740578611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:51:20.740832 kubelet[2780]: E1105 15:51:20.740802 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:20.740982 kubelet[2780]: E1105 15:51:20.740963 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:20.741143 kubelet[2780]: E1105 15:51:20.741111 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d78f875fb-jwnjz_calico-system(1ae08b84-bd20-47be-a4e3-39130515cbd3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:20.742714 kubelet[2780]: E1105 15:51:20.742673 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:51:21.313009 kubelet[2780]: E1105 15:51:21.312958 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:51:21.317357 containerd[1603]: time="2025-11-05T15:51:21.316915517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:51:21.455455 containerd[1603]: time="2025-11-05T15:51:21.455388097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 
15:51:21.457131 containerd[1603]: time="2025-11-05T15:51:21.456966348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:51:21.457507 containerd[1603]: time="2025-11-05T15:51:21.457082634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:21.457666 kubelet[2780]: E1105 15:51:21.457632 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:51:21.457791 kubelet[2780]: E1105 15:51:21.457774 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:51:21.457995 kubelet[2780]: E1105 15:51:21.457953 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ktptk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwjt_calico-system(3b4571ca-2142-4ad0-85e2-e8b00b2fb524): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:21.459988 kubelet[2780]: E1105 15:51:21.459966 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:51:21.793809 systemd[1]: Started sshd@13-172.239.60.160:22-139.178.89.65:51816.service - OpenSSH per-connection server daemon (139.178.89.65:51816). 
Nov 5 15:51:22.135751 sshd[5125]: Accepted publickey for core from 139.178.89.65 port 51816 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:22.138338 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:22.144266 systemd-logind[1580]: New session 14 of user core. Nov 5 15:51:22.150008 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:51:22.441449 sshd[5128]: Connection closed by 139.178.89.65 port 51816 Nov 5 15:51:22.442180 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:22.448423 systemd[1]: sshd@13-172.239.60.160:22-139.178.89.65:51816.service: Deactivated successfully. Nov 5 15:51:22.451284 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:51:22.452375 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:51:22.454035 systemd-logind[1580]: Removed session 14. Nov 5 15:51:24.313104 containerd[1603]: time="2025-11-05T15:51:24.313067486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:51:24.463705 containerd[1603]: time="2025-11-05T15:51:24.463634318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:24.464819 containerd[1603]: time="2025-11-05T15:51:24.464691386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:51:24.464819 containerd[1603]: time="2025-11-05T15:51:24.464791883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:51:24.465124 kubelet[2780]: E1105 15:51:24.465061 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:24.465939 kubelet[2780]: E1105 15:51:24.465108 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:24.465939 kubelet[2780]: E1105 15:51:24.465899 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityConte
xt{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:24.468239 containerd[1603]: time="2025-11-05T15:51:24.468045874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:51:24.596567 containerd[1603]: time="2025-11-05T15:51:24.596179322Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:24.597226 containerd[1603]: time="2025-11-05T15:51:24.597181681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:51:24.597338 containerd[1603]: time="2025-11-05T15:51:24.597254279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:51:24.597498 kubelet[2780]: E1105 15:51:24.597459 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:24.597548 kubelet[2780]: E1105 15:51:24.597511 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:24.597686 kubelet[2780]: E1105 15:51:24.597612 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k87mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6slgh_calico-system(175c15d8-2ca8-4a9b-b355-438a1e3fa9fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:24.598832 kubelet[2780]: E1105 15:51:24.598782 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:51:27.505931 systemd[1]: Started sshd@14-172.239.60.160:22-139.178.89.65:43396.service - OpenSSH per-connection server daemon (139.178.89.65:43396). Nov 5 15:51:27.854794 sshd[5146]: Accepted publickey for core from 139.178.89.65 port 43396 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:27.855426 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:27.863357 systemd-logind[1580]: New session 15 of user core. Nov 5 15:51:27.869867 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:51:28.167400 sshd[5149]: Connection closed by 139.178.89.65 port 43396 Nov 5 15:51:28.167976 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:28.172459 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit. 
Nov 5 15:51:28.176418 systemd[1]: sshd@14-172.239.60.160:22-139.178.89.65:43396.service: Deactivated successfully. Nov 5 15:51:28.179991 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:51:28.182698 systemd-logind[1580]: Removed session 15. Nov 5 15:51:28.313647 kubelet[2780]: E1105 15:51:28.313415 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:51:31.315669 kubelet[2780]: E1105 15:51:31.315438 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:51:31.318253 containerd[1603]: time="2025-11-05T15:51:31.318029654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:51:31.469688 containerd[1603]: time="2025-11-05T15:51:31.469507017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:31.470628 containerd[1603]: time="2025-11-05T15:51:31.470514007Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:51:31.470628 containerd[1603]: time="2025-11-05T15:51:31.470597565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:31.471226 kubelet[2780]: E1105 15:51:31.470777 2780 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:31.471226 kubelet[2780]: E1105 15:51:31.470864 2780 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:31.471226 kubelet[2780]: E1105 15:51:31.471044 2780 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrvlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6644f9d4c6-px2b8_calico-apiserver(d8afeb1c-714d-4335-9a1d-a1135daaa2b3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:31.472287 kubelet[2780]: E1105 15:51:31.472224 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:51:32.312741 kubelet[2780]: E1105 15:51:32.312639 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:51:33.223943 systemd[1]: Started sshd@15-172.239.60.160:22-139.178.89.65:43410.service - OpenSSH per-connection server daemon (139.178.89.65:43410). 
Nov 5 15:51:33.315558 kubelet[2780]: E1105 15:51:33.315513 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:51:33.556036 sshd[5162]: Accepted publickey for core from 139.178.89.65 port 43410 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:33.561075 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:33.565534 systemd-logind[1580]: New session 16 of user core. Nov 5 15:51:33.573842 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:51:33.853366 sshd[5165]: Connection closed by 139.178.89.65 port 43410 Nov 5 15:51:33.854174 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:33.858772 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:51:33.860076 systemd[1]: sshd@15-172.239.60.160:22-139.178.89.65:43410.service: Deactivated successfully. Nov 5 15:51:33.862206 systemd[1]: session-16.scope: Deactivated successfully. 
Nov 5 15:51:33.863125 systemd-logind[1580]: Removed session 16. Nov 5 15:51:34.312479 kubelet[2780]: E1105 15:51:34.311580 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:51:37.315949 kubelet[2780]: E1105 15:51:37.315861 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:51:38.914202 systemd[1]: Started sshd@16-172.239.60.160:22-139.178.89.65:33836.service - OpenSSH per-connection server daemon (139.178.89.65:33836). Nov 5 15:51:39.246349 sshd[5197]: Accepted publickey for core from 139.178.89.65 port 33836 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:39.248094 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:39.255640 systemd-logind[1580]: New session 17 of user core. Nov 5 15:51:39.262119 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 5 15:51:39.574275 sshd[5200]: Connection closed by 139.178.89.65 port 33836 Nov 5 15:51:39.575914 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:39.580658 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:51:39.583205 systemd[1]: sshd@16-172.239.60.160:22-139.178.89.65:33836.service: Deactivated successfully. Nov 5 15:51:39.587328 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:51:39.589358 systemd-logind[1580]: Removed session 17. Nov 5 15:51:40.311774 kubelet[2780]: E1105 15:51:40.311707 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:51:41.315586 kubelet[2780]: E1105 15:51:41.315520 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:51:42.313515 kubelet[2780]: E1105 15:51:42.313468 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:51:44.312842 kubelet[2780]: E1105 15:51:44.312776 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:51:44.314015 kubelet[2780]: E1105 15:51:44.313240 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:51:44.642990 systemd[1]: Started sshd@17-172.239.60.160:22-139.178.89.65:33846.service - OpenSSH per-connection server daemon (139.178.89.65:33846). Nov 5 15:51:44.977900 sshd[5211]: Accepted publickey for core from 139.178.89.65 port 33846 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:44.979869 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:44.987932 systemd-logind[1580]: New session 18 of user core. Nov 5 15:51:44.994849 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 5 15:51:45.310869 sshd[5214]: Connection closed by 139.178.89.65 port 33846 Nov 5 15:51:45.311942 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:45.319395 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:51:45.320631 systemd[1]: sshd@17-172.239.60.160:22-139.178.89.65:33846.service: Deactivated successfully. Nov 5 15:51:45.322953 kubelet[2780]: E1105 15:51:45.322926 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:51:45.323201 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:51:45.326207 systemd-logind[1580]: Removed session 18. Nov 5 15:51:45.367947 systemd[1]: Started sshd@18-172.239.60.160:22-139.178.89.65:33850.service - OpenSSH per-connection server daemon (139.178.89.65:33850). 
Nov 5 15:51:45.699448 sshd[5225]: Accepted publickey for core from 139.178.89.65 port 33850 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:45.700684 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:45.705883 systemd-logind[1580]: New session 19 of user core. Nov 5 15:51:45.711077 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:51:46.131102 sshd[5228]: Connection closed by 139.178.89.65 port 33850 Nov 5 15:51:46.131966 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:46.138797 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:51:46.139556 systemd[1]: sshd@18-172.239.60.160:22-139.178.89.65:33850.service: Deactivated successfully. Nov 5 15:51:46.142300 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:51:46.144285 systemd-logind[1580]: Removed session 19. Nov 5 15:51:46.195441 systemd[1]: Started sshd@19-172.239.60.160:22-139.178.89.65:46882.service - OpenSSH per-connection server daemon (139.178.89.65:46882). Nov 5 15:51:46.542966 sshd[5238]: Accepted publickey for core from 139.178.89.65 port 46882 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:46.543586 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:46.556440 systemd-logind[1580]: New session 20 of user core. Nov 5 15:51:46.558860 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:51:47.474137 sshd[5241]: Connection closed by 139.178.89.65 port 46882 Nov 5 15:51:47.472681 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:47.477596 systemd[1]: sshd@19-172.239.60.160:22-139.178.89.65:46882.service: Deactivated successfully. Nov 5 15:51:47.480443 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:51:47.482637 systemd-logind[1580]: Session 20 logged out. 
Waiting for processes to exit. Nov 5 15:51:47.486307 systemd-logind[1580]: Removed session 20. Nov 5 15:51:47.536413 systemd[1]: Started sshd@20-172.239.60.160:22-139.178.89.65:46888.service - OpenSSH per-connection server daemon (139.178.89.65:46888). Nov 5 15:51:47.874482 sshd[5260]: Accepted publickey for core from 139.178.89.65 port 46888 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:47.878325 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:47.884210 systemd-logind[1580]: New session 21 of user core. Nov 5 15:51:47.892842 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:51:48.332032 sshd[5263]: Connection closed by 139.178.89.65 port 46888 Nov 5 15:51:48.335937 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:48.346315 systemd-logind[1580]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:51:48.347092 systemd[1]: sshd@20-172.239.60.160:22-139.178.89.65:46888.service: Deactivated successfully. Nov 5 15:51:48.354890 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:51:48.358038 systemd-logind[1580]: Removed session 21. Nov 5 15:51:48.393798 systemd[1]: Started sshd@21-172.239.60.160:22-139.178.89.65:46894.service - OpenSSH per-connection server daemon (139.178.89.65:46894). Nov 5 15:51:48.733259 sshd[5273]: Accepted publickey for core from 139.178.89.65 port 46894 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:48.735374 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:48.749331 systemd-logind[1580]: New session 22 of user core. Nov 5 15:51:48.753412 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 15:51:49.052424 sshd[5276]: Connection closed by 139.178.89.65 port 46894 Nov 5 15:51:49.053935 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:49.063520 systemd-logind[1580]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:51:49.064880 systemd[1]: sshd@21-172.239.60.160:22-139.178.89.65:46894.service: Deactivated successfully. Nov 5 15:51:49.068548 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:51:49.073616 systemd-logind[1580]: Removed session 22. Nov 5 15:51:49.524981 containerd[1603]: time="2025-11-05T15:51:49.524927329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"7adf6147974d85e6dee208081f51d314d72a056c6a86f7e6e15e5aeda2bd28f6\" pid:5301 exited_at:{seconds:1762357909 nanos:524409093}" Nov 5 15:51:50.312425 kubelet[2780]: E1105 15:51:50.312367 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:51:50.317292 kubelet[2780]: E1105 15:51:50.317248 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:51:54.113420 systemd[1]: Started sshd@22-172.239.60.160:22-139.178.89.65:46904.service - OpenSSH per-connection server daemon (139.178.89.65:46904). Nov 5 15:51:54.314259 kubelet[2780]: E1105 15:51:54.314005 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c" Nov 5 15:51:54.461013 sshd[5316]: Accepted publickey for core from 139.178.89.65 port 46904 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:51:54.462593 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:54.469290 systemd-logind[1580]: New session 23 of user core. Nov 5 15:51:54.473877 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:51:54.770752 sshd[5319]: Connection closed by 139.178.89.65 port 46904 Nov 5 15:51:54.771059 sshd-session[5316]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:54.778239 systemd-logind[1580]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:51:54.780347 systemd[1]: sshd@22-172.239.60.160:22-139.178.89.65:46904.service: Deactivated successfully. Nov 5 15:51:54.783156 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:51:54.786334 systemd-logind[1580]: Removed session 23. 
Nov 5 15:51:55.314606 kubelet[2780]: E1105 15:51:55.313701 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3" Nov 5 15:51:55.315584 kubelet[2780]: E1105 15:51:55.315246 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065" Nov 5 15:51:56.313284 kubelet[2780]: E1105 15:51:56.313227 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524" Nov 5 15:51:56.314802 kubelet[2780]: E1105 15:51:56.314772 2780 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3" Nov 5 15:51:59.831630 systemd[1]: Started sshd@23-172.239.60.160:22-139.178.89.65:36298.service - OpenSSH per-connection server daemon (139.178.89.65:36298). Nov 5 15:52:00.166700 sshd[5333]: Accepted publickey for core from 139.178.89.65 port 36298 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:52:00.168376 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:00.175670 systemd-logind[1580]: New session 24 of user core. Nov 5 15:52:00.181846 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 5 15:52:00.312330 kubelet[2780]: E1105 15:52:00.312292 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Nov 5 15:52:00.500945 sshd[5336]: Connection closed by 139.178.89.65 port 36298 Nov 5 15:52:00.501366 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:00.507766 systemd[1]: sshd@23-172.239.60.160:22-139.178.89.65:36298.service: Deactivated successfully. Nov 5 15:52:00.510811 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:52:00.512089 systemd-logind[1580]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:52:00.513763 systemd-logind[1580]: Removed session 24. Nov 5 15:52:01.312966 kubelet[2780]: E1105 15:52:01.312842 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd" Nov 5 15:52:03.312657 kubelet[2780]: E1105 15:52:03.312622 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:52:05.565120 systemd[1]: Started sshd@24-172.239.60.160:22-139.178.89.65:36304.service - OpenSSH per-connection server daemon (139.178.89.65:36304).
Nov 5 15:52:05.890879 sshd[5349]: Accepted publickey for core from 139.178.89.65 port 36304 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:52:05.891394 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:05.903555 systemd-logind[1580]: New session 25 of user core.
Nov 5 15:52:05.907141 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 15:52:06.234818 sshd[5352]: Connection closed by 139.178.89.65 port 36304
Nov 5 15:52:06.236237 sshd-session[5349]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:06.243505 systemd[1]: sshd@24-172.239.60.160:22-139.178.89.65:36304.service: Deactivated successfully.
Nov 5 15:52:06.247522 systemd[1]: session-25.scope: Deactivated successfully.
Nov 5 15:52:06.250320 systemd-logind[1580]: Session 25 logged out. Waiting for processes to exit.
Nov 5 15:52:06.253630 systemd-logind[1580]: Removed session 25.
Nov 5 15:52:07.312947 kubelet[2780]: E1105 15:52:07.312864 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:52:08.312797 kubelet[2780]: E1105 15:52:08.312697 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065"
Nov 5 15:52:09.317779 kubelet[2780]: E1105 15:52:09.316617 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3"
Nov 5 15:52:09.321353 kubelet[2780]: E1105 15:52:09.321308 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524"
Nov 5 15:52:11.294828 systemd[1]: Started sshd@25-172.239.60.160:22-139.178.89.65:37112.service - OpenSSH per-connection server daemon (139.178.89.65:37112).
Nov 5 15:52:11.317058 kubelet[2780]: E1105 15:52:11.315520 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:52:11.629371 sshd[5365]: Accepted publickey for core from 139.178.89.65 port 37112 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:52:11.631136 sshd-session[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:11.637658 systemd-logind[1580]: New session 26 of user core.
Nov 5 15:52:11.648066 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 5 15:52:11.941643 sshd[5369]: Connection closed by 139.178.89.65 port 37112
Nov 5 15:52:11.942316 sshd-session[5365]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:11.949667 systemd[1]: sshd@25-172.239.60.160:22-139.178.89.65:37112.service: Deactivated successfully.
Nov 5 15:52:11.952691 systemd[1]: session-26.scope: Deactivated successfully.
Nov 5 15:52:11.953926 systemd-logind[1580]: Session 26 logged out. Waiting for processes to exit.
Nov 5 15:52:11.956695 systemd-logind[1580]: Removed session 26.
Nov 5 15:52:14.313574 kubelet[2780]: E1105 15:52:14.313513 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd"
Nov 5 15:52:15.312959 kubelet[2780]: E1105 15:52:15.312469 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Nov 5 15:52:17.010568 systemd[1]: Started sshd@26-172.239.60.160:22-139.178.89.65:41240.service - OpenSSH per-connection server daemon (139.178.89.65:41240).
Nov 5 15:52:17.353114 sshd[5381]: Accepted publickey for core from 139.178.89.65 port 41240 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:52:17.356609 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:17.370877 systemd-logind[1580]: New session 27 of user core.
Nov 5 15:52:17.375627 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 5 15:52:17.685850 sshd[5384]: Connection closed by 139.178.89.65 port 41240
Nov 5 15:52:17.685087 sshd-session[5381]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:17.690356 systemd-logind[1580]: Session 27 logged out. Waiting for processes to exit.
Nov 5 15:52:17.694087 systemd[1]: sshd@26-172.239.60.160:22-139.178.89.65:41240.service: Deactivated successfully.
Nov 5 15:52:17.697706 systemd[1]: session-27.scope: Deactivated successfully.
Nov 5 15:52:17.701633 systemd-logind[1580]: Removed session 27.
Nov 5 15:52:19.529298 containerd[1603]: time="2025-11-05T15:52:19.528981700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eda5c7ca199d3457cdfe11ffbafed3e25a312cad63fa0f4e3c46540755edf51d\" id:\"eeae803c2baf6318e0cfe8f24a6577d2bc553acc5ebfcfd73485e116bbbb39e4\" pid:5408 exited_at:{seconds:1762357939 nanos:528418814}"
Nov 5 15:52:20.313373 kubelet[2780]: E1105 15:52:20.313332 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-74nvz" podUID="e61d1196-bf4a-4bdd-877c-9ea9a871d23c"
Nov 5 15:52:20.314132 kubelet[2780]: E1105 15:52:20.314106 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwjt" podUID="3b4571ca-2142-4ad0-85e2-e8b00b2fb524"
Nov 5 15:52:21.313462 kubelet[2780]: E1105 15:52:21.313416 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6644f9d4c6-px2b8" podUID="d8afeb1c-714d-4335-9a1d-a1135daaa2b3"
Nov 5 15:52:22.314436 kubelet[2780]: E1105 15:52:22.314386 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fc44484bc-vfb78" podUID="e90854ce-cf6e-4a39-9e2a-1f06e654f065"
Nov 5 15:52:22.744750 systemd[1]: Started sshd@27-172.239.60.160:22-139.178.89.65:41252.service - OpenSSH per-connection server daemon (139.178.89.65:41252).
Nov 5 15:52:23.098572 sshd[5420]: Accepted publickey for core from 139.178.89.65 port 41252 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:52:23.100458 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:23.106993 systemd-logind[1580]: New session 28 of user core.
Nov 5 15:52:23.115887 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 5 15:52:23.419049 sshd[5423]: Connection closed by 139.178.89.65 port 41252
Nov 5 15:52:23.419778 sshd-session[5420]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:23.424934 systemd[1]: sshd@27-172.239.60.160:22-139.178.89.65:41252.service: Deactivated successfully.
Nov 5 15:52:23.427006 systemd[1]: session-28.scope: Deactivated successfully.
Nov 5 15:52:23.428396 systemd-logind[1580]: Session 28 logged out. Waiting for processes to exit.
Nov 5 15:52:23.430514 systemd-logind[1580]: Removed session 28.
Nov 5 15:52:25.314661 kubelet[2780]: E1105 15:52:25.314584 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6slgh" podUID="175c15d8-2ca8-4a9b-b355-438a1e3fa9fd"
Nov 5 15:52:25.315225 kubelet[2780]: E1105 15:52:25.314862 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d78f875fb-jwnjz" podUID="1ae08b84-bd20-47be-a4e3-39130515cbd3"
Nov 5 15:52:28.485707 systemd[1]: Started sshd@28-172.239.60.160:22-139.178.89.65:50162.service - OpenSSH per-connection server daemon (139.178.89.65:50162).
Nov 5 15:52:28.829007 sshd[5437]: Accepted publickey for core from 139.178.89.65 port 50162 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 15:52:28.830222 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:28.835568 systemd-logind[1580]: New session 29 of user core.
Nov 5 15:52:28.842836 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 5 15:52:29.172808 sshd[5440]: Connection closed by 139.178.89.65 port 50162
Nov 5 15:52:29.173457 sshd-session[5437]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:29.178708 systemd-logind[1580]: Session 29 logged out. Waiting for processes to exit.
Nov 5 15:52:29.180351 systemd[1]: sshd@28-172.239.60.160:22-139.178.89.65:50162.service: Deactivated successfully.
Nov 5 15:52:29.184569 systemd[1]: session-29.scope: Deactivated successfully.
Nov 5 15:52:29.188334 systemd-logind[1580]: Removed session 29.