Jan 23 18:49:37.919990 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:49:37.920013 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:37.920022 kernel: BIOS-provided physical RAM map:
Jan 23 18:49:37.920028 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Jan 23 18:49:37.920034 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Jan 23 18:49:37.920040 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 18:49:37.920049 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jan 23 18:49:37.920055 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jan 23 18:49:37.920061 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 18:49:37.920067 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 18:49:37.920073 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:49:37.920079 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 18:49:37.920085 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Jan 23 18:49:37.920091 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:49:37.920100 kernel: NX (Execute Disable) protection: active
Jan 23 18:49:37.920106 kernel: APIC: Static calls initialized
Jan 23 18:49:37.920112 kernel: SMBIOS 2.8 present.
Jan 23 18:49:37.920119 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Jan 23 18:49:37.920125 kernel: DMI: Memory slots populated: 1/1
Jan 23 18:49:37.920131 kernel: Hypervisor detected: KVM
Jan 23 18:49:37.920140 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 18:49:37.920146 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 18:49:37.920152 kernel: kvm-clock: using sched offset of 7224873264 cycles
Jan 23 18:49:37.920158 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 18:49:37.920165 kernel: tsc: Detected 1999.999 MHz processor
Jan 23 18:49:37.920172 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:49:37.920178 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:49:37.920185 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Jan 23 18:49:37.920191 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 18:49:37.920198 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:49:37.920207 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jan 23 18:49:37.920213 kernel: Using GB pages for direct mapping
Jan 23 18:49:37.920219 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:49:37.920226 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Jan 23 18:49:37.920232 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:37.920239 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:37.920245 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:37.920252 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 23 18:49:37.920258 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:37.920267 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:37.920277 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:37.920283 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:37.920290 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Jan 23 18:49:37.920297 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Jan 23 18:49:37.920306 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 23 18:49:37.920312 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Jan 23 18:49:37.920319 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Jan 23 18:49:37.920326 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Jan 23 18:49:37.920332 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Jan 23 18:49:37.920339 kernel: No NUMA configuration found
Jan 23 18:49:37.920346 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Jan 23 18:49:37.920352 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Jan 23 18:49:37.920359 kernel: Zone ranges:
Jan 23 18:49:37.920368 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:49:37.920374 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 18:49:37.920381 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 18:49:37.920388 kernel: Device empty
Jan 23 18:49:37.920394 kernel: Movable zone start for each node
Jan 23 18:49:37.920401 kernel: Early memory node ranges
Jan 23 18:49:37.920408 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 18:49:37.920414 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jan 23 18:49:37.920421 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Jan 23 18:49:37.920430 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Jan 23 18:49:37.920436 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:49:37.920443 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 18:49:37.920449 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jan 23 18:49:37.920456 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 18:49:37.920463 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 18:49:37.920470 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:49:37.920476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 18:49:37.920483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 18:49:37.920506 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:49:37.920513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 18:49:37.920520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 18:49:37.920527 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:49:37.920533 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 18:49:37.920540 kernel: TSC deadline timer available
Jan 23 18:49:37.920547 kernel: CPU topo: Max. logical packages: 1
Jan 23 18:49:37.920553 kernel: CPU topo: Max. logical dies: 1
Jan 23 18:49:37.920560 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:49:37.920566 kernel: CPU topo: Max. threads per core: 1
Jan 23 18:49:37.920576 kernel: CPU topo: Num. cores per package: 2
Jan 23 18:49:37.920582 kernel: CPU topo: Num. threads per package: 2
Jan 23 18:49:37.920589 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 18:49:37.920595 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 18:49:37.920602 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 18:49:37.920609 kernel: kvm-guest: setup PV sched yield
Jan 23 18:49:37.920615 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 18:49:37.920622 kernel: Booting paravirtualized kernel on KVM
Jan 23 18:49:37.920629 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:49:37.920638 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 18:49:37.920645 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 18:49:37.920652 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 18:49:37.920658 kernel: pcpu-alloc: [0] 0 1
Jan 23 18:49:37.920665 kernel: kvm-guest: PV spinlocks enabled
Jan 23 18:49:37.920671 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 18:49:37.920679 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:37.920686 kernel: random: crng init done
Jan 23 18:49:37.920695 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:49:37.920702 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 18:49:37.920708 kernel: Fallback order for Node 0: 0
Jan 23 18:49:37.920715 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jan 23 18:49:37.920722 kernel: Policy zone: Normal
Jan 23 18:49:37.920729 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:49:37.920735 kernel: software IO TLB: area num 2.
Jan 23 18:49:37.920742 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 18:49:37.920749 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:49:37.920757 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:49:37.920764 kernel: Dynamic Preempt: voluntary
Jan 23 18:49:37.920770 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:49:37.920778 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:49:37.920785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 18:49:37.920792 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:49:37.920798 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:49:37.920805 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:49:37.920812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:49:37.920819 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 18:49:37.920828 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:37.920841 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:37.920851 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:37.920858 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 18:49:37.920865 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:49:37.920872 kernel: Console: colour VGA+ 80x25
Jan 23 18:49:37.920878 kernel: printk: legacy console [tty0] enabled
Jan 23 18:49:37.920886 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:49:37.920893 kernel: ACPI: Core revision 20240827
Jan 23 18:49:37.920902 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 18:49:37.920909 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:49:37.920916 kernel: x2apic enabled
Jan 23 18:49:37.920923 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:49:37.920930 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 18:49:37.920937 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 18:49:37.920944 kernel: kvm-guest: setup PV IPIs
Jan 23 18:49:37.920953 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 18:49:37.920960 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Jan 23 18:49:37.920967 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Jan 23 18:49:37.920974 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:49:37.920981 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 18:49:37.920988 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 18:49:37.920995 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:49:37.921002 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 18:49:37.921009 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 18:49:37.921018 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 18:49:37.921025 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 18:49:37.921032 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 18:49:37.921040 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 18:49:37.921047 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 18:49:37.921054 kernel: active return thunk: srso_alias_return_thunk
Jan 23 18:49:37.921061 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 18:49:37.921068 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 18:49:37.921077 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 18:49:37.921084 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:49:37.921091 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:49:37.921098 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:49:37.921105 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 18:49:37.921112 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:49:37.921119 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Jan 23 18:49:37.921126 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Jan 23 18:49:37.921133 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:49:37.921142 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:49:37.921149 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:49:37.921156 kernel: landlock: Up and running.
Jan 23 18:49:37.921163 kernel: SELinux: Initializing.
Jan 23 18:49:37.921170 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:49:37.921177 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:49:37.921184 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 18:49:37.921191 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 18:49:37.921198 kernel: ... version: 0
Jan 23 18:49:37.921207 kernel: ... bit width: 48
Jan 23 18:49:37.921214 kernel: ... generic registers: 6
Jan 23 18:49:37.921221 kernel: ... value mask: 0000ffffffffffff
Jan 23 18:49:37.921228 kernel: ... max period: 00007fffffffffff
Jan 23 18:49:37.921234 kernel: ... fixed-purpose events: 0
Jan 23 18:49:37.921241 kernel: ... event mask: 000000000000003f
Jan 23 18:49:37.921248 kernel: signal: max sigframe size: 3376
Jan 23 18:49:37.921255 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:49:37.921262 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:49:37.921271 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:49:37.921278 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:49:37.921285 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:49:37.921292 kernel: .... node #0, CPUs: #1
Jan 23 18:49:37.921299 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 18:49:37.921306 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Jan 23 18:49:37.921313 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 235488K reserved, 0K cma-reserved)
Jan 23 18:49:37.921320 kernel: devtmpfs: initialized
Jan 23 18:49:37.921327 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:49:37.921337 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:49:37.921344 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 18:49:37.921351 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:49:37.921358 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:49:37.921364 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:49:37.921372 kernel: audit: type=2000 audit(1769194175.308:1): state=initialized audit_enabled=0 res=1
Jan 23 18:49:37.921378 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:49:37.921385 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:49:37.921392 kernel: cpuidle: using governor menu
Jan 23 18:49:37.921401 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:49:37.921408 kernel: dca service started, version 1.12.1
Jan 23 18:49:37.921415 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 18:49:37.921422 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 18:49:37.921429 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:49:37.921436 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:49:37.921443 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:49:37.921450 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:49:37.921457 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:49:37.921466 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:49:37.921473 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:49:37.921480 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:49:37.921487 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:49:37.921504 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:49:37.921511 kernel: ACPI: Interpreter enabled
Jan 23 18:49:37.921518 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 18:49:37.921525 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:49:37.921532 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:49:37.921541 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 18:49:37.921548 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 18:49:37.921555 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 18:49:37.921732 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 18:49:37.921890 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 18:49:37.922015 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 18:49:37.922025 kernel: PCI host bridge to bus 0000:00
Jan 23 18:49:37.922159 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 18:49:37.922302 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 18:49:37.922416 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 18:49:37.922951 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 18:49:37.923074 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 18:49:37.923187 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Jan 23 18:49:37.923299 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 18:49:37.923445 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 18:49:37.923607 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 18:49:37.923760 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 18:49:37.923888 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 18:49:37.924008 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 18:49:37.924127 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 18:49:37.924257 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Jan 23 18:49:37.924383 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Jan 23 18:49:37.924533 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 18:49:37.924660 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 18:49:37.924793 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 18:49:37.924916 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Jan 23 18:49:37.925036 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 18:49:37.925162 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 18:49:37.925282 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 18:49:37.925408 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 18:49:37.925560 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 18:49:37.925691 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 18:49:37.925812 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Jan 23 18:49:37.925932 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Jan 23 18:49:37.926066 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 18:49:37.926186 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 18:49:37.926196 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 18:49:37.926203 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 18:49:37.926210 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 18:49:37.926217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 18:49:37.926224 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 18:49:37.926231 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 18:49:37.926242 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 18:49:37.926249 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 18:49:37.926256 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 18:49:37.926263 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 18:49:37.926269 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 18:49:37.926277 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 18:49:37.926284 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 18:49:37.926290 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 18:49:37.926297 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 18:49:37.926307 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 18:49:37.926314 kernel: iommu: Default domain type: Translated
Jan 23 18:49:37.926321 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:49:37.926328 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:49:37.926335 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 18:49:37.926342 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Jan 23 18:49:37.926349 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jan 23 18:49:37.926468 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 18:49:37.926670 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 18:49:37.928544 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 18:49:37.928558 kernel: vgaarb: loaded
Jan 23 18:49:37.928566 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 18:49:37.928574 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 18:49:37.928581 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 18:49:37.928588 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:49:37.928595 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:49:37.928603 kernel: pnp: PnP ACPI init
Jan 23 18:49:37.928750 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 18:49:37.928786 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 18:49:37.928794 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 18:49:37.928801 kernel: NET: Registered PF_INET protocol family
Jan 23 18:49:37.928808 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 18:49:37.928815 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 18:49:37.928822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 18:49:37.928830 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 18:49:37.928840 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 18:49:37.928847 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 18:49:37.928854 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:49:37.928861 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:49:37.928868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 18:49:37.928875 kernel: NET: Registered PF_XDP protocol family
Jan 23 18:49:37.928994 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 18:49:37.929110 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 18:49:37.929222 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 18:49:37.929337 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 18:49:37.929448 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 18:49:37.929582 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Jan 23 18:49:37.929593 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:49:37.929600 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 18:49:37.929608 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Jan 23 18:49:37.929615 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Jan 23 18:49:37.929622 kernel: Initialise system trusted keyrings
Jan 23 18:49:37.929632 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 18:49:37.929639 kernel: Key type asymmetric registered
Jan 23 18:49:37.929646 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:49:37.929654 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:49:37.929661 kernel: io scheduler mq-deadline registered
Jan 23 18:49:37.929668 kernel: io scheduler kyber registered
Jan 23 18:49:37.929675 kernel: io scheduler bfq registered
Jan 23 18:49:37.929682 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:49:37.929689 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 18:49:37.929699 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 18:49:37.929706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:49:37.929713 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:49:37.929720 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 18:49:37.929727 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 18:49:37.929734 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 18:49:37.929741 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 18:49:37.929874 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 18:49:37.929992 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 18:49:37.930112 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T18:49:37 UTC (1769194177)
Jan 23 18:49:37.930226 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 18:49:37.930235 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 18:49:37.930242 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:49:37.930249 kernel: Segment Routing with IPv6
Jan 23 18:49:37.930256 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:49:37.930263 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:49:37.930270 kernel: Key type dns_resolver registered
Jan 23 18:49:37.930280 kernel: IPI shorthand broadcast: enabled
Jan 23 18:49:37.930288 kernel: sched_clock: Marking stable (2856004578, 362598867)->(3319457128, -100853683)
Jan 23 18:49:37.930295 kernel: registered taskstats version 1
Jan 23 18:49:37.930302 kernel: Loading compiled-in X.509 certificates
Jan 23 18:49:37.930309 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 18:49:37.930316 kernel: Demotion targets for Node 0: null
Jan 23 18:49:37.930323 kernel: Key type .fscrypt registered
Jan 23 18:49:37.930330 kernel: Key type fscrypt-provisioning registered
Jan 23 18:49:37.930337 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:49:37.930346 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:49:37.930353 kernel: ima: No architecture policies found
Jan 23 18:49:37.930360 kernel: clk: Disabling unused clocks
Jan 23 18:49:37.930367 kernel: Warning: unable to open an initial console.
Jan 23 18:49:37.930374 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 18:49:37.930382 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 18:49:37.930389 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 18:49:37.930396 kernel: Run /init as init process
Jan 23 18:49:37.930403 kernel: with arguments:
Jan 23 18:49:37.930412 kernel: /init
Jan 23 18:49:37.930419 kernel: with environment:
Jan 23 18:49:37.930440 kernel: HOME=/
Jan 23 18:49:37.930449 kernel: TERM=linux
Jan 23 18:49:37.930458 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:49:37.930468 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:49:37.930476 systemd[1]: Detected virtualization kvm.
Jan 23 18:49:37.930485 systemd[1]: Detected architecture x86-64.
Jan 23 18:49:37.932149 systemd[1]: Running in initrd.
Jan 23 18:49:37.932161 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:49:37.932170 systemd[1]: Hostname set to .
Jan 23 18:49:37.932178 systemd[1]: Initializing machine ID from random generator.
Jan 23 18:49:37.932186 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:49:37.932194 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:49:37.932202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:49:37.932214 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:49:37.932222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:49:37.932230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:49:37.932239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:49:37.932248 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 18:49:37.932256 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 18:49:37.932263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:49:37.932273 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:49:37.932281 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:49:37.932288 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:49:37.932296 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:49:37.932304 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:49:37.932311 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:49:37.932319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:49:37.932327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:49:37.932334 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:49:37.932344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:49:37.932352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:49:37.932362 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:49:37.932370 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:49:37.932377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:49:37.932387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:49:37.932395 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:49:37.932403 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:49:37.932411 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:49:37.932418 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:49:37.932428 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:49:37.932436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:49:37.932470 systemd-journald[187]: Collecting audit messages is disabled.
Jan 23 18:49:37.932506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:49:37.932515 systemd-journald[187]: Journal started
Jan 23 18:49:37.932538 systemd-journald[187]: Runtime Journal (/run/log/journal/29dc3f81895846a9833315ae71d848e7) is 8M, max 78.2M, 70.2M free.
Jan 23 18:49:37.916170 systemd-modules-load[188]: Inserted module 'overlay'
Jan 23 18:49:37.937578 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:49:37.943543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:49:38.035747 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:49:38.035770 kernel: Bridge firewalling registered
Jan 23 18:49:37.945134 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:49:37.949307 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jan 23 18:49:38.035133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:49:38.059174 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:49:38.063972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:49:38.067613 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:49:38.077608 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:49:38.081034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:49:38.091279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:49:38.099282 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:49:38.101054 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:49:38.103614 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:49:38.106768 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:49:38.109133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:49:38.117155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:49:38.120635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:49:38.131571 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:38.144722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:49:38.173948 systemd-resolved[225]: Positive Trust Anchors:
Jan 23 18:49:38.173961 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:49:38.173987 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:49:38.180625 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jan 23 18:49:38.181744 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:49:38.182902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:49:38.227528 kernel: SCSI subsystem initialized
Jan 23 18:49:38.239527 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:49:38.250522 kernel: iscsi: registered transport (tcp)
Jan 23 18:49:38.271466 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:49:38.271536 kernel: QLogic iSCSI HBA Driver
Jan 23 18:49:38.292421 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:49:38.311727 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:49:38.312856 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:49:38.364367 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:49:38.367331 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:49:38.428527 kernel: raid6: avx2x4 gen() 33187 MB/s
Jan 23 18:49:38.445519 kernel: raid6: avx2x2 gen() 32544 MB/s
Jan 23 18:49:38.463792 kernel: raid6: avx2x1 gen() 20775 MB/s
Jan 23 18:49:38.463848 kernel: raid6: using algorithm avx2x4 gen() 33187 MB/s
Jan 23 18:49:38.485445 kernel: raid6: .... xor() 4337 MB/s, rmw enabled
Jan 23 18:49:38.485475 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 18:49:38.506530 kernel: xor: automatically using best checksumming function avx
Jan 23 18:49:38.645531 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:49:38.652687 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:49:38.655033 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:49:38.683903 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jan 23 18:49:38.689735 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:49:38.693187 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 18:49:38.716704 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Jan 23 18:49:38.742676 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:49:38.745713 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:49:38.823455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:49:38.827593 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 18:49:38.879543 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Jan 23 18:49:38.888515 kernel: scsi host0: Virtio SCSI HBA
Jan 23 18:49:38.909515 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 18:49:38.918597 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 23 18:49:38.920550 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 18:49:38.950988 kernel: libata version 3.00 loaded.
Jan 23 18:49:38.950017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:49:38.950130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:49:38.953394 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:49:38.957667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:49:38.959782 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 18:49:38.966514 kernel: AES CTR mode by8 optimization enabled
Jan 23 18:49:39.095524 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 23 18:49:39.095805 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 18:49:39.095968 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 18:49:39.100048 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Jan 23 18:49:39.103670 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 18:49:39.103850 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 18:49:39.104002 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 23 18:49:39.104154 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 18:49:39.107510 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 18:49:39.107688 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 18:49:39.119518 kernel: scsi host1: ahci
Jan 23 18:49:39.154968 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 18:49:39.155018 kernel: GPT:9289727 != 167739391
Jan 23 18:49:39.155030 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 18:49:39.155040 kernel: GPT:9289727 != 167739391
Jan 23 18:49:39.155049 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 18:49:39.155059 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:49:39.157527 kernel: scsi host2: ahci
Jan 23 18:49:39.159697 kernel: scsi host3: ahci
Jan 23 18:49:39.159900 kernel: scsi host4: ahci
Jan 23 18:49:39.162065 kernel: scsi host5: ahci
Jan 23 18:49:39.165542 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 18:49:39.165741 kernel: scsi host6: ahci
Jan 23 18:49:39.181786 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 44 lpm-pol 1
Jan 23 18:49:39.181822 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 44 lpm-pol 1
Jan 23 18:49:39.185007 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 44 lpm-pol 1
Jan 23 18:49:39.188178 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 44 lpm-pol 1
Jan 23 18:49:39.191519 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 44 lpm-pol 1
Jan 23 18:49:39.191578 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 44 lpm-pol 1
Jan 23 18:49:39.261732 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 23 18:49:39.338591 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:49:39.347790 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 23 18:49:39.348652 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 23 18:49:39.359598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 18:49:39.369447 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 23 18:49:39.378422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 18:49:39.395029 disk-uuid[607]: Primary Header is updated.
Jan 23 18:49:39.395029 disk-uuid[607]: Secondary Entries is updated.
Jan 23 18:49:39.395029 disk-uuid[607]: Secondary Header is updated.
Jan 23 18:49:39.406539 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:49:39.420524 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:49:39.513520 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 18:49:39.513550 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 18:49:39.513561 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 18:49:39.513572 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 18:49:39.513587 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 18:49:39.513597 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 18:49:39.592825 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:49:39.612206 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:49:39.613331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:49:39.615189 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:49:39.619797 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 18:49:39.661741 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:49:40.427569 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 18:49:40.428095 disk-uuid[608]: The operation has completed successfully.
Jan 23 18:49:40.501710 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 18:49:40.501868 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 18:49:40.534459 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 18:49:40.556466 sh[635]: Success
Jan 23 18:49:40.578867 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 18:49:40.578912 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:49:40.584530 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:49:40.596538 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 18:49:40.645327 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 18:49:40.651585 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 18:49:40.662268 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 18:49:40.675530 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (647)
Jan 23 18:49:40.675566 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 18:49:40.678898 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:49:40.692828 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 18:49:40.692863 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:49:40.692888 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:49:40.697209 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 18:49:40.698638 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:49:40.699977 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 18:49:40.700948 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 18:49:40.705638 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 18:49:40.736809 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (682)
Jan 23 18:49:40.744793 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:49:40.744827 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:49:40.756245 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 18:49:40.756274 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 18:49:40.756286 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 18:49:40.764515 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:49:40.766293 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 18:49:40.769640 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 18:49:40.845966 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:49:40.852095 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:49:40.908387 ignition[753]: Ignition 2.22.0
Jan 23 18:49:40.908402 ignition[753]: Stage: fetch-offline
Jan 23 18:49:40.908446 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:49:40.908462 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:49:40.909791 ignition[753]: parsed url from cmdline: ""
Jan 23 18:49:40.913858 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:49:40.909798 ignition[753]: no config URL provided
Jan 23 18:49:40.909806 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:49:40.909819 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:49:40.909827 ignition[753]: failed to fetch config: resource requires networking
Jan 23 18:49:40.910182 ignition[753]: Ignition finished successfully
Jan 23 18:49:40.922383 systemd-networkd[816]: lo: Link UP
Jan 23 18:49:40.922397 systemd-networkd[816]: lo: Gained carrier
Jan 23 18:49:40.924784 systemd-networkd[816]: Enumeration completed
Jan 23 18:49:40.924882 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:49:40.926237 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:49:40.926242 systemd-networkd[816]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:49:40.927264 systemd[1]: Reached target network.target - Network.
Jan 23 18:49:40.928908 systemd-networkd[816]: eth0: Link UP
Jan 23 18:49:40.929072 systemd-networkd[816]: eth0: Gained carrier
Jan 23 18:49:40.929084 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:49:40.931612 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 18:49:40.964947 ignition[825]: Ignition 2.22.0
Jan 23 18:49:40.965546 ignition[825]: Stage: fetch
Jan 23 18:49:40.965712 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:49:40.965729 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:49:40.965837 ignition[825]: parsed url from cmdline: ""
Jan 23 18:49:40.965844 ignition[825]: no config URL provided
Jan 23 18:49:40.965853 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:49:40.965865 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:49:40.965894 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1
Jan 23 18:49:40.966071 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 18:49:41.166261 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2
Jan 23 18:49:41.167098 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 18:49:41.567952 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3
Jan 23 18:49:41.568126 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 18:49:41.818592 systemd-networkd[816]: eth0: DHCPv4 address 172.239.197.220/24, gateway 172.239.197.1 acquired from 23.40.197.134
Jan 23 18:49:42.368306 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4
Jan 23 18:49:42.463758 ignition[825]: PUT result: OK
Jan 23 18:49:42.463838 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1
Jan 23 18:49:42.555060 systemd-networkd[816]: eth0: Gained IPv6LL
Jan 23 18:49:42.571764 ignition[825]: GET result: OK
Jan 23 18:49:42.572386 ignition[825]: parsing config with SHA512: d86a7d8d9e23400340eae61fc649bd48ef493cb748f40335ef1cda9a6aaa04d41d9898c2d851b2f6e9cf388285f0da6fc0b1635237b2b326c796451e3dbcf429
Jan 23 18:49:42.579454 unknown[825]: fetched base config from "system"
Jan 23 18:49:42.579468 unknown[825]: fetched base config from "system"
Jan 23 18:49:42.579878 ignition[825]: fetch: fetch complete
Jan 23 18:49:42.579477 unknown[825]: fetched user config from "akamai"
Jan 23 18:49:42.579886 ignition[825]: fetch: fetch passed
Jan 23 18:49:42.585591 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 18:49:42.579938 ignition[825]: Ignition finished successfully
Jan 23 18:49:42.609879 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 18:49:42.649533 ignition[832]: Ignition 2.22.0
Jan 23 18:49:42.649549 ignition[832]: Stage: kargs
Jan 23 18:49:42.649663 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:49:42.649674 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:49:42.653438 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 18:49:42.650463 ignition[832]: kargs: kargs passed
Jan 23 18:49:42.650544 ignition[832]: Ignition finished successfully
Jan 23 18:49:42.658127 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 18:49:42.706542 ignition[838]: Ignition 2.22.0
Jan 23 18:49:42.706563 ignition[838]: Stage: disks
Jan 23 18:49:42.706755 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:49:42.710868 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 18:49:42.706772 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Jan 23 18:49:42.712315 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 18:49:42.708057 ignition[838]: disks: disks passed
Jan 23 18:49:42.713986 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 18:49:42.708113 ignition[838]: Ignition finished successfully
Jan 23 18:49:42.715812 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:49:42.717466 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:49:42.719485 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:49:42.723639 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 18:49:42.759788 systemd-fsck[846]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 18:49:42.763604 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 18:49:42.767140 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 18:49:42.895526 kernel: EXT4-fs (sda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 18:49:42.895591 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 18:49:42.896979 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:49:42.900147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:49:42.903563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 18:49:42.905699 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 18:49:42.905750 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 18:49:42.905775 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 18:49:42.920895 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 18:49:42.937881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (854)
Jan 23 18:49:42.937907 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:49:42.937919 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:49:42.937929 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 18:49:42.937940 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 18:49:42.937949 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 18:49:42.944056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:49:42.947124 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 18:49:43.010547 initrd-setup-root[878]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 18:49:43.017218 initrd-setup-root[885]: cut: /sysroot/etc/group: No such file or directory
Jan 23 18:49:43.022401 initrd-setup-root[892]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 18:49:43.027555 initrd-setup-root[899]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 18:49:43.144475 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 18:49:43.147169 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 18:49:43.149341 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 18:49:43.171049 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:49:43.175001 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:43.191550 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:49:43.210540 ignition[968]: INFO : Ignition 2.22.0 Jan 23 18:49:43.210540 ignition[968]: INFO : Stage: mount Jan 23 18:49:43.212717 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:43.212717 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 18:49:43.212717 ignition[968]: INFO : mount: mount passed Jan 23 18:49:43.212717 ignition[968]: INFO : Ignition finished successfully Jan 23 18:49:43.215343 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:49:43.220267 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:49:43.898158 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:49:43.929538 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979) Jan 23 18:49:43.929615 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:43.933700 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:43.945288 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:49:43.945345 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:49:43.945372 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:49:43.950911 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:49:43.993158 ignition[995]: INFO : Ignition 2.22.0 Jan 23 18:49:43.993158 ignition[995]: INFO : Stage: files Jan 23 18:49:43.995366 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:43.995366 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 18:49:43.995366 ignition[995]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:49:43.999149 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:49:43.999149 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:49:44.001548 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:49:44.001548 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:49:44.001548 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:49:44.001197 unknown[995]: wrote ssh authorized keys file for user: core Jan 23 18:49:44.005829 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:49:44.005829 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:49:44.301040 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:49:44.472060 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:49:44.473650 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:49:44.475130 ignition[995]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:49:44.475130 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:49:44.475130 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:49:44.475130 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:49:44.475130 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:49:44.475130 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:49:44.475130 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:49:44.484893 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:49:44.484893 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:49:44.484893 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:44.484893 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:44.484893 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:44.484893 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 18:49:45.019436 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:49:45.462179 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:45.462179 ignition[995]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:49:45.465095 ignition[995]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:49:45.467234 ignition[995]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:49:45.467234 ignition[995]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:49:45.467234 ignition[995]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 18:49:45.467234 ignition[995]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 18:49:45.473311 ignition[995]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 18:49:45.473311 
ignition[995]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 18:49:45.473311 ignition[995]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:49:45.473311 ignition[995]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:49:45.473311 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:49:45.473311 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:49:45.473311 ignition[995]: INFO : files: files passed Jan 23 18:49:45.473311 ignition[995]: INFO : Ignition finished successfully Jan 23 18:49:45.475813 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:49:45.477768 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:49:45.487817 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:49:45.494939 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:49:45.496135 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:49:45.517138 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:45.517138 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:45.520770 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:45.521433 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:49:45.523639 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:49:45.525934 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:49:45.594917 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:49:45.595115 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:49:45.597795 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:49:45.599910 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:49:45.600979 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:49:45.602675 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:49:45.632411 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:49:45.636720 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:49:45.654958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:49:45.656144 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:49:45.658053 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:49:45.659794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:49:45.660028 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:49:45.661874 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:49:45.663145 systemd[1]: Stopped target basic.target - Basic System. 
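The files stage above amounts to a handful of filesystem operations performed against the mounted /sysroot: plain file writes, the sysext symlink under /etc/extensions, and a unit plus its enablement. A rough standalone equivalent in Go, with the paths taken from the log and all file contents, modes, and the preset-style wants link assumed for illustration:

package main

import (
	"os"
	"path/filepath"
)

func main() {
	root := "/sysroot"

	// files: e.g. /etc/flatcar/update.conf written in place (contents assumed).
	conf := filepath.Join(root, "etc/flatcar/update.conf")
	os.MkdirAll(filepath.Dir(conf), 0755)
	os.WriteFile(conf, []byte("GROUP=stable\n"), 0644)

	// links: the sysext symlink that lets systemd-sysext pick up the
	// Kubernetes extension image on the real root.
	link := filepath.Join(root, "etc/extensions/kubernetes.raw")
	os.MkdirAll(filepath.Dir(link), 0755)
	os.Symlink("/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw", link)

	// units: write prepare-helm.service; "setting preset to enabled"
	// ultimately corresponds to a wants-style symlink like this one.
	unit := filepath.Join(root, "etc/systemd/system/prepare-helm.service")
	os.WriteFile(unit, []byte("[Unit]\nDescription=Unpack helm to /opt/bin\n"), 0644)
	wants := filepath.Join(root, "etc/systemd/system/multi-user.target.wants/prepare-helm.service")
	os.MkdirAll(filepath.Dir(wants), 0755)
	os.Symlink("/etc/systemd/system/prepare-helm.service", wants)
}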
Jan 23 18:49:45.664879 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:49:45.666549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:49:45.668295 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:49:45.669979 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:49:45.671910 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:49:45.673815 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:49:45.675451 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:49:45.677325 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:49:45.679089 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:49:45.680813 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:49:45.681055 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:49:45.682763 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:49:45.683998 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:49:45.685582 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:49:45.685766 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:49:45.687399 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:49:45.687650 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:49:45.689983 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:49:45.690152 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:49:45.691987 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:49:45.692128 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:49:45.696685 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:49:45.698066 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:49:45.699615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:49:45.703535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:49:45.705699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:49:45.706322 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:49:45.707345 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:49:45.707567 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:49:45.719718 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:49:45.743607 ignition[1050]: INFO : Ignition 2.22.0 Jan 23 18:49:45.743607 ignition[1050]: INFO : Stage: umount Jan 23 18:49:45.743607 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:45.743607 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Jan 23 18:49:45.743607 ignition[1050]: INFO : umount: umount passed Jan 23 18:49:45.743607 ignition[1050]: INFO : Ignition finished successfully Jan 23 18:49:45.719849 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:49:45.749322 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 23 18:49:45.749441 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:49:45.756236 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:49:45.757445 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:49:45.757586 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:49:45.759257 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:49:45.759331 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:49:45.761641 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:49:45.761690 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:49:45.762701 systemd[1]: Stopped target network.target - Network. Jan 23 18:49:45.763987 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:49:45.764077 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:49:45.765522 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:49:45.766922 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:49:45.770546 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:49:45.771944 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:49:45.773681 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:49:45.775195 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:49:45.775245 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:49:45.776693 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:49:45.776743 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:49:45.778135 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:49:45.778209 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:49:45.779754 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:49:45.779860 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:49:45.781592 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:49:45.783156 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:49:45.785293 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:49:45.785464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:49:45.789694 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:49:45.789829 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:49:45.794922 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:49:45.796631 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:49:45.796768 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:49:45.799626 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:49:45.800345 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:49:45.802006 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:49:45.802053 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:49:45.803888 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:49:45.803950 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 23 18:49:45.806630 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:49:45.808396 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:49:45.808452 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:49:45.812662 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:49:45.812716 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:49:45.814164 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:49:45.814219 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:49:45.815372 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:49:45.815428 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:49:45.816666 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:49:45.823273 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:49:45.823343 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:49:45.838636 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:49:45.838888 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:49:45.841848 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:49:45.841985 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:49:45.844769 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:49:45.844854 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:49:45.846578 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:49:45.846622 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:49:45.848083 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:49:45.848141 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:49:45.850518 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:49:45.850594 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:49:45.852102 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:49:45.852172 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:49:45.854792 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:49:45.858700 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:49:45.858789 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:49:45.860615 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:49:45.860673 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:49:45.863386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:45.863438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:45.869443 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 18:49:45.869604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Jan 23 18:49:45.869702 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:49:45.878084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:49:45.878266 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:49:45.880312 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:49:45.882859 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:49:45.916376 systemd[1]: Switching root. Jan 23 18:49:45.952845 systemd-journald[187]: Journal stopped Jan 23 18:49:47.190399 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jan 23 18:49:47.190428 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:49:47.190440 kernel: SELinux: policy capability open_perms=1 Jan 23 18:49:47.190450 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:49:47.190458 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:49:47.190470 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:49:47.190479 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:49:47.190489 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:49:47.190756 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:49:47.190769 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:49:47.190779 kernel: audit: type=1403 audit(1769194186.149:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 18:49:47.190790 systemd[1]: Successfully loaded SELinux policy in 98.060ms. Jan 23 18:49:47.193611 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.981ms. Jan 23 18:49:47.193634 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:49:47.193646 systemd[1]: Detected virtualization kvm. Jan 23 18:49:47.193657 systemd[1]: Detected architecture x86-64. Jan 23 18:49:47.193670 systemd[1]: Detected first boot. Jan 23 18:49:47.193681 systemd[1]: Initializing machine ID from random generator. Jan 23 18:49:47.193691 kernel: Guest personality initialized and is inactive Jan 23 18:49:47.193701 zram_generator::config[1094]: No configuration found. Jan 23 18:49:47.193711 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:49:47.193721 kernel: Initialized host personality Jan 23 18:49:47.193730 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:49:47.193740 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:49:47.193753 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 18:49:47.193763 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:49:47.193773 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:49:47.193783 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:49:47.193793 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:49:47.193803 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:49:47.193813 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jan 23 18:49:47.193825 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:49:47.193835 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:49:47.193845 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:49:47.193856 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:49:47.193866 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:49:47.193877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:49:47.193888 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:49:47.193898 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:49:47.193911 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:49:47.193924 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:49:47.193934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:49:47.193945 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:49:47.193955 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:49:47.193965 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:49:47.193976 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:49:47.193988 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:49:47.193999 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:49:47.194009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:49:47.194019 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:49:47.194029 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:49:47.194039 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:49:47.194049 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:49:47.194059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:49:47.194069 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:49:47.194082 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:49:47.194093 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:49:47.194103 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:49:47.194114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:49:47.194126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:49:47.194136 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:49:47.194147 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:49:47.194157 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:49:47.194167 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:47.194177 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 23 18:49:47.194188 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:49:47.194198 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:49:47.194210 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:49:47.194221 systemd[1]: Reached target machines.target - Containers. Jan 23 18:49:47.194231 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:49:47.194242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:47.194252 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:49:47.194262 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:49:47.194272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:49:47.194283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:49:47.194293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:49:47.194305 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:49:47.194316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:49:47.194326 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:49:47.194337 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:49:47.194347 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:49:47.194358 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:49:47.194368 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:49:47.194378 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:47.194391 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:49:47.194401 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:49:47.194411 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:49:47.194421 kernel: loop: module loaded Jan 23 18:49:47.194431 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:49:47.194441 kernel: ACPI: bus type drm_connector registered Jan 23 18:49:47.194451 kernel: fuse: init (API version 7.41) Jan 23 18:49:47.194461 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:49:47.194473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:49:47.194483 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 18:49:47.195554 systemd[1]: Stopped verity-setup.service. Jan 23 18:49:47.195575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:47.195614 systemd-journald[1175]: Collecting audit messages is disabled. Jan 23 18:49:47.195640 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 23 18:49:47.195651 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:49:47.195662 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:49:47.195673 systemd-journald[1175]: Journal started Jan 23 18:49:47.195692 systemd-journald[1175]: Runtime Journal (/run/log/journal/c17b8ac179c840f786db6428a1edf385) is 8M, max 78.2M, 70.2M free. Jan 23 18:49:46.790988 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:49:46.804344 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 18:49:46.804862 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:49:47.203189 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:49:47.204091 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:49:47.205236 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:49:47.206340 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:49:47.207635 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:49:47.208751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:49:47.209920 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:49:47.210131 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:49:47.211445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:49:47.211910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:49:47.213095 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:49:47.213364 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:49:47.214821 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:49:47.215133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:49:47.216334 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:49:47.216821 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:49:47.218163 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:49:47.218652 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:49:47.219873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:49:47.221008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:49:47.222243 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:49:47.223361 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:49:47.237923 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:49:47.242606 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:49:47.251731 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:49:47.255639 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:49:47.255737 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:49:47.257594 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:49:47.271242 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 23 18:49:47.272147 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:49:47.274266 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:49:47.277605 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:49:47.279608 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:49:47.281683 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:49:47.284276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:49:47.286639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:49:47.291190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:49:47.298040 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:49:47.302929 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:49:47.303784 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:49:47.317437 systemd-journald[1175]: Time spent on flushing to /var/log/journal/c17b8ac179c840f786db6428a1edf385 is 30.126ms for 1007 entries. Jan 23 18:49:47.317437 systemd-journald[1175]: System Journal (/var/log/journal/c17b8ac179c840f786db6428a1edf385) is 8M, max 195.6M, 187.6M free. Jan 23 18:49:47.387220 systemd-journald[1175]: Received client request to flush runtime journal. Jan 23 18:49:47.387273 kernel: loop0: detected capacity change from 0 to 229808 Jan 23 18:49:47.341448 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:49:47.342371 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:49:47.348972 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:49:47.358346 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:49:47.387168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:49:47.395175 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:49:47.401854 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:49:47.410755 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:49:47.423195 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:49:47.426650 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:49:47.434323 kernel: loop1: detected capacity change from 0 to 8 Jan 23 18:49:47.455629 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 18:49:47.490972 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 23 18:49:47.491371 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 23 18:49:47.501748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
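For scale, the journal flush statistics above work out to roughly 30 µs per entry:

package main

import "fmt"

func main() {
	// Figures from the journald flush message above.
	const flushMs = 30.126 // time spent flushing, in milliseconds
	const entries = 1007
	fmt.Printf("%.1f µs per entry\n", flushMs*1000/entries) // ≈ 29.9 µs
}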
Jan 23 18:49:47.508551 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 18:49:47.537538 kernel: loop4: detected capacity change from 0 to 229808 Jan 23 18:49:47.560511 kernel: loop5: detected capacity change from 0 to 8 Jan 23 18:49:47.565517 kernel: loop6: detected capacity change from 0 to 110984 Jan 23 18:49:47.579516 kernel: loop7: detected capacity change from 0 to 128560 Jan 23 18:49:47.596614 (sd-merge)[1247]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Jan 23 18:49:47.599213 (sd-merge)[1247]: Merged extensions into '/usr'. Jan 23 18:49:47.608082 systemd[1]: Reload requested from client PID 1220 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:49:47.608100 systemd[1]: Reloading... Jan 23 18:49:47.729603 zram_generator::config[1273]: No configuration found. Jan 23 18:49:47.827063 ldconfig[1215]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:49:47.961175 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:49:47.961692 systemd[1]: Reloading finished in 352 ms. Jan 23 18:49:47.990716 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:49:47.991865 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:49:47.993004 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:49:48.003823 systemd[1]: Starting ensure-sysext.service... Jan 23 18:49:48.007607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:49:48.009680 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:49:48.027160 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:49:48.027179 systemd[1]: Reloading... Jan 23 18:49:48.044200 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Jan 23 18:49:48.050630 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:49:48.051401 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:49:48.051752 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:49:48.052019 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 18:49:48.052915 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 18:49:48.053166 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jan 23 18:49:48.053241 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jan 23 18:49:48.063350 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:49:48.063365 systemd-tmpfiles[1318]: Skipping /boot Jan 23 18:49:48.079405 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:49:48.079418 systemd-tmpfiles[1318]: Skipping /boot Jan 23 18:49:48.167563 zram_generator::config[1374]: No configuration found. 
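The sd-merge lines show systemd-sysext finding the four extension images (plausibly the loop0-loop7 capacity changes, each image appearing twice), overlaying them onto /usr, and triggering the unit reload that follows. A sketch of just the enumeration step, assuming the standard sysext search directories; the helper is hypothetical and does none of the verity or overlay work:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Among the places systemd-sysext consults for extension images.
	dirs := []string{"/etc/extensions", "/run/extensions", "/var/lib/extensions"}
	for _, dir := range dirs {
		entries, err := os.ReadDir(dir)
		if err != nil {
			continue // directory may simply not exist
		}
		for _, e := range entries {
			if strings.HasSuffix(e.Name(), ".raw") || e.IsDir() {
				fmt.Println("candidate extension:", filepath.Join(dir, e.Name()))
			}
		}
	}
}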
Jan 23 18:49:48.359530 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 18:49:48.379520 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:49:48.395534 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:49:48.416523 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:49:48.420517 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:49:48.430527 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:49:48.431040 systemd[1]: Reloading finished in 403 ms. Jan 23 18:49:48.443583 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:49:48.446066 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:49:48.480729 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:49:48.486714 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:49:48.491085 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:49:48.499443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:49:48.504755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:49:48.508664 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:49:48.517036 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:48.517700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:48.522746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:49:48.525943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:49:48.535571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:49:48.537649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:49:48.537748 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:48.537831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:48.544768 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:48.545193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:48.545343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:49:48.545624 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:48.550850 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 23 18:49:48.552571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:48.560107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:48.560321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:48.563354 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:49:48.564348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:49:48.564425 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:48.564502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:48.565544 systemd[1]: Finished ensure-sysext.service. Jan 23 18:49:48.574913 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 18:49:48.584197 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:49:48.622181 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:49:48.624028 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:49:48.624716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:49:48.626070 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:49:48.626765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:49:48.627826 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:49:48.628628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:49:48.630521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:49:48.631012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:49:48.637956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:49:48.638107 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:49:48.641014 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:49:48.670082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:49:48.672323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:49:48.673860 augenrules[1479]: No rules Jan 23 18:49:48.676959 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:49:48.677260 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:49:48.686975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:48.689787 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 23 18:49:48.702795 kernel: EDAC MC: Ver: 3.0.0 Jan 23 18:49:48.714783 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:49:48.781731 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 18:49:48.785616 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:49:48.818871 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:49:48.926664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:48.991646 systemd-networkd[1442]: lo: Link UP Jan 23 18:49:48.991658 systemd-networkd[1442]: lo: Gained carrier Jan 23 18:49:48.993363 systemd-networkd[1442]: Enumeration completed Jan 23 18:49:48.993452 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:49:48.994586 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:48.994595 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:49:48.996595 systemd-networkd[1442]: eth0: Link UP Jan 23 18:49:48.996786 systemd-networkd[1442]: eth0: Gained carrier Jan 23 18:49:48.996807 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:48.997273 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:49:49.003633 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:49:49.008716 systemd-resolved[1443]: Positive Trust Anchors: Jan 23 18:49:49.009624 systemd-resolved[1443]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:49:49.009697 systemd-resolved[1443]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:49:49.013205 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 18:49:49.014421 systemd-resolved[1443]: Defaulting to hostname 'linux'. Jan 23 18:49:49.014631 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:49:49.016072 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:49:49.016984 systemd[1]: Reached target network.target - Network. Jan 23 18:49:49.017757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:49:49.018511 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:49:49.020964 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:49:49.022294 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:49:49.023086 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
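eth0 matched /usr/lib/systemd/network/zz-default.network, the stock catch-all unit that DHCPs any otherwise unconfigured interface (hence the "potentially unpredictable interface name" warning). Its gist, rendered from Go for illustration; the real file ships with Flatcar and carries more options than shown here:

package main

import "fmt"

const zzDefault = `[Match]
Name=*

[Network]
DHCP=yes
`

func main() {
	fmt.Print(zzDefault)
}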
Jan 23 18:49:49.024054 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:49:49.024869 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:49:49.025640 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:49:49.026386 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:49:49.026417 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:49:49.027102 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:49:49.028891 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:49:49.031066 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:49:49.033742 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:49:49.034662 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:49:49.035405 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:49:49.038099 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:49:49.039328 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:49:49.040979 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:49:49.041906 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:49:49.043717 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:49:49.044410 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:49:49.045200 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:49:49.045235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:49:49.046433 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:49:49.049610 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:49:49.053867 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:49:49.056048 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:49:49.059643 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:49:49.063805 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:49:49.095138 jq[1517]: false Jan 23 18:49:49.097050 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:49:49.103722 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:49:49.108851 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:49:49.116933 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
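Several of the sockets above (dbus.socket, docker.socket, sshd.socket) use systemd socket activation: systemd binds the socket, and the service inherits it when first started. A minimal Go sketch of the receiving side of that contract, using the standard LISTEN_PID/LISTEN_FDS convention in which inherited descriptors start at fd 3:

package main

import (
	"fmt"
	"net"
	"os"
	"strconv"
)

func main() {
	// LISTEN_PID guards against the fds being inherited by the wrong process.
	if pid, _ := strconv.Atoi(os.Getenv("LISTEN_PID")); pid != os.Getpid() {
		fmt.Println("not socket-activated")
		return
	}
	// LISTEN_FDS counts the passed descriptors, numbered from 3 upward.
	n, _ := strconv.Atoi(os.Getenv("LISTEN_FDS"))
	for i := 0; i < n; i++ {
		f := os.NewFile(uintptr(3+i), fmt.Sprintf("listen-fd-%d", 3+i))
		if ln, err := net.FileListener(f); err == nil {
			defer ln.Close()
			fmt.Println("inherited listener on", ln.Addr())
		}
	}
}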
Jan 23 18:49:49.120784 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache Jan 23 18:49:49.120792 oslogin_cache_refresh[1521]: Refreshing passwd entry cache Jan 23 18:49:49.123318 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting Jan 23 18:49:49.123318 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:49:49.123318 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache Jan 23 18:49:49.123254 oslogin_cache_refresh[1521]: Failure getting users, quitting Jan 23 18:49:49.123269 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:49:49.123308 oslogin_cache_refresh[1521]: Refreshing group entry cache Jan 23 18:49:49.123834 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting Jan 23 18:49:49.123834 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:49:49.123777 oslogin_cache_refresh[1521]: Failure getting groups, quitting Jan 23 18:49:49.123787 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:49:49.124621 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:49:49.130918 extend-filesystems[1518]: Found /dev/sda6 Jan 23 18:49:49.134313 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:49:49.143131 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:49:49.144022 extend-filesystems[1518]: Found /dev/sda9 Jan 23 18:49:49.148336 extend-filesystems[1518]: Checking size of /dev/sda9 Jan 23 18:49:49.146145 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:49:49.147769 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:49:49.149375 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:49:49.157627 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:49:49.163547 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:49:49.172805 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:49:49.175350 extend-filesystems[1518]: Resized partition /dev/sda9 Jan 23 18:49:49.176003 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:49:49.186746 update_engine[1536]: I20260123 18:49:49.184180 1536 main.cc:92] Flatcar Update Engine starting Jan 23 18:49:49.187043 coreos-metadata[1514]: Jan 23 18:49:49.179 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 18:49:49.176470 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:49:49.177816 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:49:49.183200 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 23 18:49:49.203416 extend-filesystems[1555]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:49:49.210554 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Jan 23 18:49:49.183516 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:49:49.220633 jq[1537]: true Jan 23 18:49:49.243540 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:49:49.243894 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:49:49.258237 dbus-daemon[1515]: [system] SELinux support is enabled Jan 23 18:49:49.258581 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:49:49.266644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:49:49.266680 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:49:49.268621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:49:49.268646 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:49:49.270872 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 18:49:49.274549 tar[1544]: linux-amd64/LICENSE Jan 23 18:49:49.276387 tar[1544]: linux-amd64/helm Jan 23 18:49:49.279980 jq[1557]: true Jan 23 18:49:49.287713 update_engine[1536]: I20260123 18:49:49.285084 1536 update_check_scheduler.cc:74] Next update check in 8m51s Jan 23 18:49:49.283266 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:49:49.299465 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:49:49.330991 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 18:49:49.331019 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:49:49.333076 systemd-logind[1531]: New seat seat0. Jan 23 18:49:49.334482 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:49:49.386847 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:49:49.391708 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:49:49.406819 systemd[1]: Starting sshkeys.service... Jan 23 18:49:49.435263 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 18:49:49.438608 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 18:49:49.495534 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Jan 23 18:49:49.525205 extend-filesystems[1555]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 18:49:49.525205 extend-filesystems[1555]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 23 18:49:49.525205 extend-filesystems[1555]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Jan 23 18:49:49.524461 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:49:49.531873 extend-filesystems[1518]: Resized filesystem in /dev/sda9 Jan 23 18:49:49.525581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
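The extend-filesystems entries above record an online grow of the root ext4 filesystem: resize2fs expands /dev/sda9 from 553472 to 20360187 4k blocks while it is mounted. A minimal sketch of the equivalent manual step, assuming the same /dev/sda9 layout as in this log (illustrative only, not Flatcar's actual unit):

```python
#!/usr/bin/env python3
"""Sketch of the online ext4 grow logged by extend-filesystems.service.
Assumes the root filesystem lives on /dev/sda9 as in the log above."""
import subprocess

DEVICE = "/dev/sda9"

# Called with no size argument, resize2fs grows a mounted ext4 filesystem
# to fill its block device (here 553472 -> 20360187 4k blocks).
subprocess.run(["resize2fs", DEVICE], check=True)
```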
Jan 23 18:49:49.557721 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:49:49.602268 coreos-metadata[1586]: Jan 23 18:49:49.601 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Jan 23 18:49:49.661130 containerd[1560]: time="2026-01-23T18:49:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:49:49.662147 containerd[1560]: time="2026-01-23T18:49:49.662124306Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 18:49:49.670015 containerd[1560]: time="2026-01-23T18:49:49.669983030Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.17µs" Jan 23 18:49:49.670604 containerd[1560]: time="2026-01-23T18:49:49.670576041Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:49:49.670646 containerd[1560]: time="2026-01-23T18:49:49.670608701Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:49:49.670800 containerd[1560]: time="2026-01-23T18:49:49.670774531Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:49:49.670827 containerd[1560]: time="2026-01-23T18:49:49.670801681Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:49:49.670846 containerd[1560]: time="2026-01-23T18:49:49.670829451Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:49:49.670915 containerd[1560]: time="2026-01-23T18:49:49.670893771Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:49:49.670937 containerd[1560]: time="2026-01-23T18:49:49.670913071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:49:49.671190 containerd[1560]: time="2026-01-23T18:49:49.671165781Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:49:49.671190 containerd[1560]: time="2026-01-23T18:49:49.671187861Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:49:49.671239 containerd[1560]: time="2026-01-23T18:49:49.671200941Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:49:49.671239 containerd[1560]: time="2026-01-23T18:49:49.671210731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:49:49.671321 containerd[1560]: time="2026-01-23T18:49:49.671299311Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:49:49.674627 containerd[1560]: time="2026-01-23T18:49:49.674602873Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 
18:49:49.674665 containerd[1560]: time="2026-01-23T18:49:49.674647813Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:49:49.674665 containerd[1560]: time="2026-01-23T18:49:49.674661033Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:49:49.674701 containerd[1560]: time="2026-01-23T18:49:49.674684933Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:49:49.675111 containerd[1560]: time="2026-01-23T18:49:49.675026583Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:49:49.675111 containerd[1560]: time="2026-01-23T18:49:49.675095733Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:49:49.680126 containerd[1560]: time="2026-01-23T18:49:49.680096605Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:49:49.680189 containerd[1560]: time="2026-01-23T18:49:49.680166245Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:49:49.680212 containerd[1560]: time="2026-01-23T18:49:49.680190896Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680239676Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680272196Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680283776Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680294406Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680304486Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680313516Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680332496Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680342426Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:49:49.680422 containerd[1560]: time="2026-01-23T18:49:49.680352786Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680464546Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680483516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: 
time="2026-01-23T18:49:49.680539196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680553856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680563116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680571846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680581656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680590916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680600346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680609116Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:49:49.680617 containerd[1560]: time="2026-01-23T18:49:49.680617676Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:49:49.680788 containerd[1560]: time="2026-01-23T18:49:49.680649006Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:49:49.680788 containerd[1560]: time="2026-01-23T18:49:49.680659766Z" level=info msg="Start snapshots syncer" Jan 23 18:49:49.682612 containerd[1560]: time="2026-01-23T18:49:49.682527427Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:49:49.682840 containerd[1560]: time="2026-01-23T18:49:49.682805227Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:49:49.682938 containerd[1560]: time="2026-01-23T18:49:49.682855297Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:49:49.682938 containerd[1560]: time="2026-01-23T18:49:49.682909857Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:49:49.683135 containerd[1560]: time="2026-01-23T18:49:49.683055517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:49:49.683135 containerd[1560]: time="2026-01-23T18:49:49.683077377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:49:49.683135 containerd[1560]: time="2026-01-23T18:49:49.683086797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:49:49.683135 containerd[1560]: time="2026-01-23T18:49:49.683095637Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:49:49.683135 containerd[1560]: time="2026-01-23T18:49:49.683111047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:49:49.683135 containerd[1560]: time="2026-01-23T18:49:49.683120747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:49:49.683135 containerd[1560]: time="2026-01-23T18:49:49.683130287Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:49:49.683250 containerd[1560]: time="2026-01-23T18:49:49.683149557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:49:49.683250 containerd[1560]: 
time="2026-01-23T18:49:49.683158627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 18:49:49.683250 containerd[1560]: time="2026-01-23T18:49:49.683167697Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:49:49.683250 containerd[1560]: time="2026-01-23T18:49:49.683207647Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:49:49.683250 containerd[1560]: time="2026-01-23T18:49:49.683220717Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:49:49.683250 containerd[1560]: time="2026-01-23T18:49:49.683228797Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:49:49.683250 containerd[1560]: time="2026-01-23T18:49:49.683237157Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:49:49.683366 containerd[1560]: time="2026-01-23T18:49:49.683244007Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:49:49.683366 containerd[1560]: time="2026-01-23T18:49:49.683305077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:49:49.683366 containerd[1560]: time="2026-01-23T18:49:49.683321807Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:49:49.683366 containerd[1560]: time="2026-01-23T18:49:49.683336717Z" level=info msg="runtime interface created" Jan 23 18:49:49.683366 containerd[1560]: time="2026-01-23T18:49:49.683341757Z" level=info msg="created NRI interface" Jan 23 18:49:49.683366 containerd[1560]: time="2026-01-23T18:49:49.683354467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:49:49.683366 containerd[1560]: time="2026-01-23T18:49:49.683364407Z" level=info msg="Connect containerd service" Jan 23 18:49:49.683524 containerd[1560]: time="2026-01-23T18:49:49.683380457Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:49:49.686346 containerd[1560]: time="2026-01-23T18:49:49.686276809Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:49:49.761962 sshd_keygen[1564]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:49:49.800938 systemd-networkd[1442]: eth0: DHCPv4 address 172.239.197.220/24, gateway 172.239.197.1 acquired from 23.40.197.134 Jan 23 18:49:49.801981 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1442 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 18:49:49.804929 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection. Jan 23 18:49:49.807481 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 18:49:49.810555 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 23 18:49:49.814805 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828156439Z" level=info msg="Start subscribing containerd event" Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828196239Z" level=info msg="Start recovering state" Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828279389Z" level=info msg="Start event monitor" Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828291290Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828297970Z" level=info msg="Start streaming server" Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828305630Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828312640Z" level=info msg="runtime interface starting up..." Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828318070Z" level=info msg="starting plugins..." Jan 23 18:49:49.828732 containerd[1560]: time="2026-01-23T18:49:49.828333700Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:49:49.829848 containerd[1560]: time="2026-01-23T18:49:49.829761610Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:49:49.830469 containerd[1560]: time="2026-01-23T18:49:49.829994170Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:49:49.830756 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 18:49:49.831700 containerd[1560]: time="2026-01-23T18:49:49.831683941Z" level=info msg="containerd successfully booted in 0.170989s" Jan 23 18:49:49.839986 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:49:49.840571 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:49:49.845722 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:49:49.876786 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:49:49.881817 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:49:49.887789 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:49:49.889867 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:49:49.916732 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 18:49:49.919544 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 18:49:49.920014 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1623 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 18:49:49.925420 tar[1544]: linux-amd64/README.md Jan 23 18:49:49.926722 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 18:49:49.943714 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 18:49:50.002677 polkitd[1632]: Started polkitd version 126 Jan 23 18:49:50.005864 polkitd[1632]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 18:49:50.006115 polkitd[1632]: Loading rules from directory /run/polkit-1/rules.d Jan 23 18:49:50.006163 polkitd[1632]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 18:49:50.006350 polkitd[1632]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 18:49:50.006377 polkitd[1632]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 18:49:50.006412 polkitd[1632]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 18:49:50.006913 polkitd[1632]: Finished loading, compiling and executing 2 rules Jan 23 18:49:50.007115 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 18:49:50.008118 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 18:49:50.008385 polkitd[1632]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 18:49:50.016725 systemd-hostnamed[1623]: Hostname set to <172-239-197-220> (transient) Jan 23 18:49:50.016837 systemd-resolved[1443]: System hostname changed to '172-239-197-220'. Jan 23 18:49:50.961892 systemd-resolved[1443]: Clock change detected. Flushing caches. Jan 23 18:49:50.962041 systemd-timesyncd[1458]: Contacted time server 99.28.14.242:123 (0.flatcar.pool.ntp.org). Jan 23 18:49:50.962094 systemd-timesyncd[1458]: Initial clock synchronization to Fri 2026-01-23 18:49:50.961848 UTC. Jan 23 18:49:51.103080 systemd-networkd[1442]: eth0: Gained IPv6LL Jan 23 18:49:51.108380 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:49:51.112921 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:49:51.119057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:51.122796 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:49:51.126978 coreos-metadata[1514]: Jan 23 18:49:51.126 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 18:49:51.168362 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:49:51.230958 coreos-metadata[1514]: Jan 23 18:49:51.229 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Jan 23 18:49:51.410960 coreos-metadata[1514]: Jan 23 18:49:51.410 INFO Fetch successful Jan 23 18:49:51.410960 coreos-metadata[1514]: Jan 23 18:49:51.410 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Jan 23 18:49:51.546625 coreos-metadata[1586]: Jan 23 18:49:51.546 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Jan 23 18:49:51.643372 coreos-metadata[1586]: Jan 23 18:49:51.643 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Jan 23 18:49:51.666996 coreos-metadata[1514]: Jan 23 18:49:51.666 INFO Fetch successful Jan 23 18:49:51.780529 coreos-metadata[1586]: Jan 23 18:49:51.780 INFO Fetch successful Jan 23 18:49:51.793880 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 18:49:51.797126 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
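The coreos-metadata entries interleaved above follow a token-then-fetch pattern against the link-local metadata service: PUT /v1/token, then GET /v1/instance, /v1/network and /v1/ssh-keys with the token attached. A rough sketch of that exchange; the header names are assumed from Linode's metadata service documentation, not taken from the agent's code:

```python
#!/usr/bin/env python3
"""Sketch of the metadata exchange logged by coreos-metadata: obtain a
short-lived token, then fetch instance data with it. Header names are
assumptions based on Linode's metadata API."""
import urllib.request

BASE = "http://169.254.169.254"

# Step 1: PUT /v1/token (the "Putting .../v1/token" lines above).
req = urllib.request.Request(
    f"{BASE}/v1/token",
    method="PUT",
    headers={"Metadata-Token-Expiry-Seconds": "300"},  # assumed header
)
token = urllib.request.urlopen(req, timeout=5).read().decode()

# Step 2: fetch /v1/instance; the agent pulls /v1/network and
# /v1/ssh-keys the same way.
info = urllib.request.Request(
    f"{BASE}/v1/instance",
    headers={"Metadata-Token": token, "Accept": "application/json"},
)
print(urllib.request.urlopen(info, timeout=5).read().decode())
```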
Jan 23 18:49:51.804513 update-ssh-keys[1679]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:49:51.806285 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 18:49:51.810982 systemd[1]: Finished sshkeys.service. Jan 23 18:49:52.133537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:52.135330 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:49:52.136560 systemd[1]: Startup finished in 2.927s (kernel) + 8.459s (initrd) + 5.151s (userspace) = 16.538s. Jan 23 18:49:52.145059 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:49:52.718386 kubelet[1688]: E0123 18:49:52.718313 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:49:52.722415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:49:52.722620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:49:52.723058 systemd[1]: kubelet.service: Consumed 989ms CPU time, 267.4M memory peak. Jan 23 18:49:53.396060 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:49:53.397189 systemd[1]: Started sshd@0-172.239.197.220:22-68.220.241.50:46870.service - OpenSSH per-connection server daemon (68.220.241.50:46870). Jan 23 18:49:53.599813 sshd[1700]: Accepted publickey for core from 68.220.241.50 port 46870 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:49:53.601971 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:53.611540 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:49:53.612889 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:49:53.624666 systemd-logind[1531]: New session 1 of user core. Jan 23 18:49:53.635137 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:49:53.638578 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:49:53.653097 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 18:49:53.656663 systemd-logind[1531]: New session c1 of user core. Jan 23 18:49:53.829271 systemd[1705]: Queued start job for default target default.target. Jan 23 18:49:53.845447 systemd[1705]: Created slice app.slice - User Application Slice. Jan 23 18:49:53.845476 systemd[1705]: Reached target paths.target - Paths. Jan 23 18:49:53.845820 systemd[1705]: Reached target timers.target - Timers. Jan 23 18:49:53.847469 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:49:53.860046 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:49:53.860171 systemd[1705]: Reached target sockets.target - Sockets. Jan 23 18:49:53.860219 systemd[1705]: Reached target basic.target - Basic System. Jan 23 18:49:53.860264 systemd[1705]: Reached target default.target - Main User Target. Jan 23 18:49:53.860304 systemd[1705]: Startup finished in 196ms. Jan 23 18:49:53.860420 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 23 18:49:53.873957 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:49:54.031115 systemd[1]: Started sshd@1-172.239.197.220:22-68.220.241.50:46876.service - OpenSSH per-connection server daemon (68.220.241.50:46876). Jan 23 18:49:54.192863 sshd[1716]: Accepted publickey for core from 68.220.241.50 port 46876 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:49:54.193966 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:54.200108 systemd-logind[1531]: New session 2 of user core. Jan 23 18:49:54.214961 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:49:54.329922 sshd[1719]: Connection closed by 68.220.241.50 port 46876 Jan 23 18:49:54.330994 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:54.335403 systemd[1]: sshd@1-172.239.197.220:22-68.220.241.50:46876.service: Deactivated successfully. Jan 23 18:49:54.337814 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 18:49:54.338876 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit. Jan 23 18:49:54.340257 systemd-logind[1531]: Removed session 2. Jan 23 18:49:54.360123 systemd[1]: Started sshd@2-172.239.197.220:22-68.220.241.50:46890.service - OpenSSH per-connection server daemon (68.220.241.50:46890). Jan 23 18:49:54.526831 sshd[1725]: Accepted publickey for core from 68.220.241.50 port 46890 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:49:54.528230 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:54.533821 systemd-logind[1531]: New session 3 of user core. Jan 23 18:49:54.544906 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:49:54.654514 sshd[1728]: Connection closed by 68.220.241.50 port 46890 Jan 23 18:49:54.656537 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:54.661754 systemd[1]: sshd@2-172.239.197.220:22-68.220.241.50:46890.service: Deactivated successfully. Jan 23 18:49:54.664130 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:49:54.665244 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:49:54.667043 systemd-logind[1531]: Removed session 3. Jan 23 18:49:54.686187 systemd[1]: Started sshd@3-172.239.197.220:22-68.220.241.50:46898.service - OpenSSH per-connection server daemon (68.220.241.50:46898). Jan 23 18:49:54.845529 sshd[1734]: Accepted publickey for core from 68.220.241.50 port 46898 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:49:54.848149 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:54.854321 systemd-logind[1531]: New session 4 of user core. Jan 23 18:49:54.863924 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:49:54.979787 sshd[1737]: Connection closed by 68.220.241.50 port 46898 Jan 23 18:49:54.980339 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:54.984213 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:49:54.984393 systemd[1]: sshd@3-172.239.197.220:22-68.220.241.50:46898.service: Deactivated successfully. Jan 23 18:49:54.986481 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:49:54.987894 systemd-logind[1531]: Removed session 4. 
Jan 23 18:49:55.007542 systemd[1]: Started sshd@4-172.239.197.220:22-68.220.241.50:46904.service - OpenSSH per-connection server daemon (68.220.241.50:46904). Jan 23 18:49:55.163944 sshd[1743]: Accepted publickey for core from 68.220.241.50 port 46904 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:49:55.165412 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:55.170281 systemd-logind[1531]: New session 5 of user core. Jan 23 18:49:55.172892 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:49:55.281460 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:49:55.281814 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:55.298042 sudo[1747]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:55.319413 sshd[1746]: Connection closed by 68.220.241.50 port 46904 Jan 23 18:49:55.321023 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:55.327107 systemd[1]: sshd@4-172.239.197.220:22-68.220.241.50:46904.service: Deactivated successfully. Jan 23 18:49:55.330149 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:49:55.332050 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:49:55.333677 systemd-logind[1531]: Removed session 5. Jan 23 18:49:55.356465 systemd[1]: Started sshd@5-172.239.197.220:22-68.220.241.50:46906.service - OpenSSH per-connection server daemon (68.220.241.50:46906). Jan 23 18:49:55.534285 sshd[1753]: Accepted publickey for core from 68.220.241.50 port 46906 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:49:55.537046 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:55.544532 systemd-logind[1531]: New session 6 of user core. Jan 23 18:49:55.548914 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:49:55.645751 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:49:55.646647 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:55.655790 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:55.662615 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:49:55.663053 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:55.674706 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:49:55.724045 augenrules[1780]: No rules Jan 23 18:49:55.725505 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:49:55.725990 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:49:55.727077 sudo[1757]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:55.748163 sshd[1756]: Connection closed by 68.220.241.50 port 46906 Jan 23 18:49:55.749936 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:55.754625 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:49:55.755198 systemd[1]: sshd@5-172.239.197.220:22-68.220.241.50:46906.service: Deactivated successfully. Jan 23 18:49:55.757262 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:49:55.759219 systemd-logind[1531]: Removed session 6. 
Jan 23 18:49:55.784938 systemd[1]: Started sshd@6-172.239.197.220:22-68.220.241.50:46920.service - OpenSSH per-connection server daemon (68.220.241.50:46920). Jan 23 18:49:55.984831 sshd[1789]: Accepted publickey for core from 68.220.241.50 port 46920 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:49:55.986673 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:55.993234 systemd-logind[1531]: New session 7 of user core. Jan 23 18:49:55.999942 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:49:56.107175 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:49:56.107682 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:56.406695 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 18:49:56.425173 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:49:56.671650 dockerd[1811]: time="2026-01-23T18:49:56.670625687Z" level=info msg="Starting up" Jan 23 18:49:56.672752 dockerd[1811]: time="2026-01-23T18:49:56.672708249Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:49:56.685166 dockerd[1811]: time="2026-01-23T18:49:56.685124375Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:49:56.728726 dockerd[1811]: time="2026-01-23T18:49:56.728684126Z" level=info msg="Loading containers: start." Jan 23 18:49:56.741954 kernel: Initializing XFRM netlink socket Jan 23 18:49:57.031043 systemd-networkd[1442]: docker0: Link UP Jan 23 18:49:57.033747 dockerd[1811]: time="2026-01-23T18:49:57.033708309Z" level=info msg="Loading containers: done." Jan 23 18:49:57.049944 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2705127196-merged.mount: Deactivated successfully. Jan 23 18:49:57.051415 dockerd[1811]: time="2026-01-23T18:49:57.051008348Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:49:57.051415 dockerd[1811]: time="2026-01-23T18:49:57.051107228Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:49:57.051415 dockerd[1811]: time="2026-01-23T18:49:57.051214968Z" level=info msg="Initializing buildkit" Jan 23 18:49:57.071497 dockerd[1811]: time="2026-01-23T18:49:57.071465708Z" level=info msg="Completed buildkit initialization" Jan 23 18:49:57.079353 dockerd[1811]: time="2026-01-23T18:49:57.079323322Z" level=info msg="Daemon has completed initialization" Jan 23 18:49:57.079447 dockerd[1811]: time="2026-01-23T18:49:57.079364022Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:49:57.079633 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 18:49:57.751044 containerd[1560]: time="2026-01-23T18:49:57.751004007Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 18:49:58.298666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636289247.mount: Deactivated successfully. 
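Once dockerd logs "API listen on /run/docker.sock", the daemon serves its HTTP API over that Unix socket. A small standard-library liveness probe against the documented /_ping endpoint (a sketch for verification, not part of the boot sequence):

```python
#!/usr/bin/env python3
"""Probe the Docker daemon over /run/docker.sock after the "API listen"
entry above. GET /_ping returns b"OK" when the daemon is healthy."""
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix socket instead of TCP."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")
print(conn.getresponse().read())  # b'OK'
```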
Jan 23 18:49:59.625936 containerd[1560]: time="2026-01-23T18:49:59.625842074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:59.627051 containerd[1560]: time="2026-01-23T18:49:59.626972835Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114718" Jan 23 18:49:59.627629 containerd[1560]: time="2026-01-23T18:49:59.627592705Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:59.630395 containerd[1560]: time="2026-01-23T18:49:59.629947756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:59.631007 containerd[1560]: time="2026-01-23T18:49:59.630922137Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.87987972s" Jan 23 18:49:59.631054 containerd[1560]: time="2026-01-23T18:49:59.631017247Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 18:49:59.631874 containerd[1560]: time="2026-01-23T18:49:59.631839807Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 18:50:00.940465 containerd[1560]: time="2026-01-23T18:50:00.940397571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:00.941737 containerd[1560]: time="2026-01-23T18:50:00.941524691Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016787" Jan 23 18:50:00.942283 containerd[1560]: time="2026-01-23T18:50:00.942259102Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:00.944631 containerd[1560]: time="2026-01-23T18:50:00.944603533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:00.945605 containerd[1560]: time="2026-01-23T18:50:00.945578633Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.313639356s" Jan 23 18:50:00.945687 containerd[1560]: time="2026-01-23T18:50:00.945673134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 18:50:00.950614 
containerd[1560]: time="2026-01-23T18:50:00.950532816Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 18:50:02.031252 containerd[1560]: time="2026-01-23T18:50:02.031182736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:02.032437 containerd[1560]: time="2026-01-23T18:50:02.032330136Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158108" Jan 23 18:50:02.033122 containerd[1560]: time="2026-01-23T18:50:02.033080067Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:02.035402 containerd[1560]: time="2026-01-23T18:50:02.035364318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:02.036407 containerd[1560]: time="2026-01-23T18:50:02.036368979Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.085748753s" Jan 23 18:50:02.036490 containerd[1560]: time="2026-01-23T18:50:02.036474659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 18:50:02.037271 containerd[1560]: time="2026-01-23T18:50:02.037230009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 18:50:02.727230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:50:02.731261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:02.936258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:02.944369 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:50:02.993796 kubelet[2099]: E0123 18:50:02.990676 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:50:02.999749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:50:02.999962 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:50:03.000339 systemd[1]: kubelet.service: Consumed 205ms CPU time, 107.7M memory peak. Jan 23 18:50:03.085316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314375220.mount: Deactivated successfully. 
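Both kubelet exits above have the same cause: kubelet.service starts before /var/lib/kubelet/config.yaml exists, fails with status 1, and systemd schedules a restart; the file is normally written by kubeadm init or kubeadm join. A sketch of the check, with a minimal illustrative KubeletConfiguration shape (not the full file kubeadm generates):

```python
#!/usr/bin/env python3
"""Why kubelet.service exits above: its config file is missing until
kubeadm generates it. The embedded YAML is an illustrative minimal
shape, not the file kubeadm actually writes."""
from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")

MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd  # matches SystemdCgroup=true in the CRI config above
"""

if not CONFIG.exists():
    print(f"{CONFIG} is missing; kubelet will exit until kubeadm writes it")
    # Writing MINIMAL here would stop the crash loop, but on a real node
    # this is kubeadm's job:
    # CONFIG.parent.mkdir(parents=True, exist_ok=True)
    # CONFIG.write_text(MINIMAL)
```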
Jan 23 18:50:03.489972 containerd[1560]: time="2026-01-23T18:50:03.489921575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:03.491103 containerd[1560]: time="2026-01-23T18:50:03.491046705Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930102" Jan 23 18:50:03.491289 containerd[1560]: time="2026-01-23T18:50:03.491267735Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:03.493302 containerd[1560]: time="2026-01-23T18:50:03.493282706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:03.494019 containerd[1560]: time="2026-01-23T18:50:03.493997607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.456735598s" Jan 23 18:50:03.494108 containerd[1560]: time="2026-01-23T18:50:03.494088567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 18:50:03.497114 containerd[1560]: time="2026-01-23T18:50:03.497043438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 18:50:04.006876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649744777.mount: Deactivated successfully. 
Jan 23 18:50:04.763712 containerd[1560]: time="2026-01-23T18:50:04.763667351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:04.764815 containerd[1560]: time="2026-01-23T18:50:04.764764832Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Jan 23 18:50:04.765847 containerd[1560]: time="2026-01-23T18:50:04.765825372Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:04.768030 containerd[1560]: time="2026-01-23T18:50:04.767989923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:04.768866 containerd[1560]: time="2026-01-23T18:50:04.768828414Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.271742986s" Jan 23 18:50:04.768866 containerd[1560]: time="2026-01-23T18:50:04.768856394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 18:50:04.769281 containerd[1560]: time="2026-01-23T18:50:04.769255344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:50:05.300840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931178767.mount: Deactivated successfully. 
Jan 23 18:50:05.308318 containerd[1560]: time="2026-01-23T18:50:05.308280613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:50:05.308911 containerd[1560]: time="2026-01-23T18:50:05.308883364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Jan 23 18:50:05.310270 containerd[1560]: time="2026-01-23T18:50:05.309346214Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:50:05.310797 containerd[1560]: time="2026-01-23T18:50:05.310759745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:50:05.311476 containerd[1560]: time="2026-01-23T18:50:05.311454185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 542.177201ms" Jan 23 18:50:05.311550 containerd[1560]: time="2026-01-23T18:50:05.311535735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:50:05.312095 containerd[1560]: time="2026-01-23T18:50:05.312070265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 18:50:05.829802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991371669.mount: Deactivated successfully. 
Jan 23 18:50:07.511602 containerd[1560]: time="2026-01-23T18:50:07.511552204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:07.512603 containerd[1560]: time="2026-01-23T18:50:07.512509985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926233" Jan 23 18:50:07.513127 containerd[1560]: time="2026-01-23T18:50:07.513101015Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:07.515428 containerd[1560]: time="2026-01-23T18:50:07.515406606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:07.516807 containerd[1560]: time="2026-01-23T18:50:07.516260557Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.204169242s" Jan 23 18:50:07.516807 containerd[1560]: time="2026-01-23T18:50:07.516287837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 18:50:10.655348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:10.655568 systemd[1]: kubelet.service: Consumed 205ms CPU time, 107.7M memory peak. Jan 23 18:50:10.658069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:10.689911 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-7.scope)... Jan 23 18:50:10.690093 systemd[1]: Reloading... Jan 23 18:50:10.862277 zram_generator::config[2294]: No configuration found. Jan 23 18:50:11.116047 systemd[1]: Reloading finished in 425 ms. Jan 23 18:50:11.173852 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 18:50:11.173964 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 18:50:11.174442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:11.174502 systemd[1]: kubelet.service: Consumed 169ms CPU time, 98.3M memory peak. Jan 23 18:50:11.177141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:11.372439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:11.380394 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:50:11.417574 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:50:11.417574 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:50:11.417574 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:50:11.417970 kubelet[2349]: I0123 18:50:11.417671 2349 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:50:11.768106 kubelet[2349]: I0123 18:50:11.768067 2349 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:50:11.768106 kubelet[2349]: I0123 18:50:11.768090 2349 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:50:11.768319 kubelet[2349]: I0123 18:50:11.768298 2349 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:50:11.793725 kubelet[2349]: I0123 18:50:11.793709 2349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:50:11.794633 kubelet[2349]: E0123 18:50:11.794561 2349 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.197.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.197.220:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:50:11.807748 kubelet[2349]: I0123 18:50:11.807722 2349 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:50:11.813123 kubelet[2349]: I0123 18:50:11.813101 2349 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 18:50:11.813357 kubelet[2349]: I0123 18:50:11.813325 2349 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:50:11.813504 kubelet[2349]: I0123 18:50:11.813357 2349 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-197-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:50:11.813619 kubelet[2349]: I0123 18:50:11.813509 2349 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 23 18:50:11.813619 kubelet[2349]: I0123 18:50:11.813518 2349 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:50:11.814386 kubelet[2349]: I0123 18:50:11.814360 2349 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:11.816680 kubelet[2349]: I0123 18:50:11.816646 2349 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:50:11.816723 kubelet[2349]: I0123 18:50:11.816680 2349 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:50:11.816723 kubelet[2349]: I0123 18:50:11.816719 2349 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:50:11.818681 kubelet[2349]: I0123 18:50:11.818623 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:50:11.825213 kubelet[2349]: E0123 18:50:11.824900 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.197.220:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-197-220&limit=500&resourceVersion=0\": dial tcp 172.239.197.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:50:11.826422 kubelet[2349]: E0123 18:50:11.826402 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.197.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.197.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:50:11.826571 kubelet[2349]: I0123 18:50:11.826545 2349 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:50:11.827081 kubelet[2349]: I0123 18:50:11.827067 2349 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:50:11.828675 kubelet[2349]: W0123 18:50:11.828661 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 23 18:50:11.832126 kubelet[2349]: I0123 18:50:11.832111 2349 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:50:11.832242 kubelet[2349]: I0123 18:50:11.832231 2349 server.go:1289] "Started kubelet" Jan 23 18:50:11.834603 kubelet[2349]: I0123 18:50:11.834434 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:50:11.838559 kubelet[2349]: E0123 18:50:11.837333 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.197.220:6443/api/v1/namespaces/default/events\": dial tcp 172.239.197.220:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-197-220.188d70c316ef5d1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-197-220,UID:172-239-197-220,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-197-220,},FirstTimestamp:2026-01-23 18:50:11.832192283 +0000 UTC m=+0.447587935,LastTimestamp:2026-01-23 18:50:11.832192283 +0000 UTC m=+0.447587935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-197-220,}" Jan 23 18:50:11.840090 kubelet[2349]: I0123 18:50:11.839955 2349 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:50:11.841191 kubelet[2349]: I0123 18:50:11.841177 2349 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:50:11.846360 kubelet[2349]: I0123 18:50:11.846320 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:50:11.846637 kubelet[2349]: I0123 18:50:11.846623 2349 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:50:11.846906 kubelet[2349]: I0123 18:50:11.846893 2349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:50:11.847320 kubelet[2349]: I0123 18:50:11.847295 2349 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:50:11.847649 kubelet[2349]: E0123 18:50:11.847622 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-239-197-220\" not found" Jan 23 18:50:11.850111 kubelet[2349]: E0123 18:50:11.849172 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.197.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-197-220?timeout=10s\": dial tcp 172.239.197.220:6443: connect: connection refused" interval="200ms" Jan 23 18:50:11.850111 kubelet[2349]: E0123 18:50:11.849362 2349 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:50:11.850111 kubelet[2349]: I0123 18:50:11.849424 2349 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:50:11.850111 kubelet[2349]: I0123 18:50:11.849449 2349 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:50:11.850111 kubelet[2349]: E0123 18:50:11.849678 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.197.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.197.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:50:11.851709 kubelet[2349]: I0123 18:50:11.851689 2349 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:50:11.851709 kubelet[2349]: I0123 18:50:11.851707 2349 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:50:11.851816 kubelet[2349]: I0123 18:50:11.851757 2349 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:50:11.866045 kubelet[2349]: I0123 18:50:11.866002 2349 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:50:11.873808 kubelet[2349]: I0123 18:50:11.873713 2349 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:50:11.873808 kubelet[2349]: I0123 18:50:11.873746 2349 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:50:11.873808 kubelet[2349]: I0123 18:50:11.873808 2349 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:50:11.873914 kubelet[2349]: I0123 18:50:11.873820 2349 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:50:11.873914 kubelet[2349]: E0123 18:50:11.873867 2349 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:50:11.877412 kubelet[2349]: I0123 18:50:11.877216 2349 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:50:11.877412 kubelet[2349]: I0123 18:50:11.877227 2349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:50:11.877412 kubelet[2349]: I0123 18:50:11.877242 2349 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:11.878764 kubelet[2349]: I0123 18:50:11.878749 2349 policy_none.go:49] "None policy: Start" Jan 23 18:50:11.878853 kubelet[2349]: I0123 18:50:11.878843 2349 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:50:11.878910 kubelet[2349]: I0123 18:50:11.878900 2349 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:50:11.882368 kubelet[2349]: E0123 18:50:11.882349 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.197.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.197.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:50:11.885839 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
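The lease controller above logs "Failed to ensure lease exists, will retry" with interval="200ms", and the interval doubles on later failures (400ms, then 800ms further down) while the API server at 172.239.197.220:6443 refuses connections. A minimal sketch of that capped exponential backoff pattern, using only the standard library; the 7s cap and 5-attempt bound are assumptions for illustration, not kubelet's implementation:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // starting interval seen in the log
	const max = 7 * time.Second        // assumed cap, for the sketch only
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", "172.239.197.220:6443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("lease endpoint reachable")
			return
		}
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		// Double the interval after each failure, as the 200ms -> 400ms -> 800ms
		// progression in the log suggests, but never past the cap.
		if interval *= 2; interval > max {
			interval = max
		}
	}
	fmt.Println("giving up")
}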
Jan 23 18:50:11.899686 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:50:11.913890 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:50:11.916630 kubelet[2349]: E0123 18:50:11.916605 2349 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:50:11.916843 kubelet[2349]: I0123 18:50:11.916818 2349 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:50:11.916884 kubelet[2349]: I0123 18:50:11.916846 2349 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:50:11.917940 kubelet[2349]: I0123 18:50:11.917920 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:50:11.919295 kubelet[2349]: E0123 18:50:11.918937 2349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:50:11.919295 kubelet[2349]: E0123 18:50:11.918997 2349 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-197-220\" not found" Jan 23 18:50:11.985108 systemd[1]: Created slice kubepods-burstable-podc277d3d1589d8dcc5cce8593e1b3b23d.slice - libcontainer container kubepods-burstable-podc277d3d1589d8dcc5cce8593e1b3b23d.slice. Jan 23 18:50:12.003026 kubelet[2349]: E0123 18:50:12.002987 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:12.006072 systemd[1]: Created slice kubepods-burstable-podc34fd9332156ff57ab1ca58fb26c8eec.slice - libcontainer container kubepods-burstable-podc34fd9332156ff57ab1ca58fb26c8eec.slice. Jan 23 18:50:12.017379 kubelet[2349]: E0123 18:50:12.017356 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:12.019255 kubelet[2349]: I0123 18:50:12.018554 2349 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-220" Jan 23 18:50:12.019421 kubelet[2349]: E0123 18:50:12.019383 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.197.220:6443/api/v1/nodes\": dial tcp 172.239.197.220:6443: connect: connection refused" node="172-239-197-220" Jan 23 18:50:12.022693 systemd[1]: Created slice kubepods-burstable-podd8e9edf0b6f04089e2b0758f210cfcf8.slice - libcontainer container kubepods-burstable-podd8e9edf0b6f04089e2b0758f210cfcf8.slice. 
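"Failed to read data from checkpoint ... checkpoint is not found" above is the normal first-boot case: the state file simply does not exist yet, which the manager treats differently from a genuine read error. A sketch of that distinction; the device-plugins path for the kubelet_internal_checkpoint name is an assumption here:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Assumed location for the checkpoint named in the log.
	data, err := os.ReadFile("/var/lib/kubelet/device-plugins/kubelet_internal_checkpoint")
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("checkpoint is not found; starting from empty state")
	case err != nil:
		fmt.Println("checkpoint read failed:", err)
	default:
		fmt.Printf("loaded %d byte(s) of checkpoint state\n", len(data))
	}
}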
Jan 23 18:50:12.024622 kubelet[2349]: E0123 18:50:12.024600 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:12.049980 kubelet[2349]: E0123 18:50:12.049948 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.197.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-197-220?timeout=10s\": dial tcp 172.239.197.220:6443: connect: connection refused" interval="400ms" Jan 23 18:50:12.051164 kubelet[2349]: I0123 18:50:12.051146 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c277d3d1589d8dcc5cce8593e1b3b23d-kubeconfig\") pod \"kube-scheduler-172-239-197-220\" (UID: \"c277d3d1589d8dcc5cce8593e1b3b23d\") " pod="kube-system/kube-scheduler-172-239-197-220" Jan 23 18:50:12.051212 kubelet[2349]: I0123 18:50:12.051173 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c34fd9332156ff57ab1ca58fb26c8eec-ca-certs\") pod \"kube-apiserver-172-239-197-220\" (UID: \"c34fd9332156ff57ab1ca58fb26c8eec\") " pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:12.051212 kubelet[2349]: I0123 18:50:12.051192 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c34fd9332156ff57ab1ca58fb26c8eec-k8s-certs\") pod \"kube-apiserver-172-239-197-220\" (UID: \"c34fd9332156ff57ab1ca58fb26c8eec\") " pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:12.051212 kubelet[2349]: I0123 18:50:12.051208 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c34fd9332156ff57ab1ca58fb26c8eec-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-197-220\" (UID: \"c34fd9332156ff57ab1ca58fb26c8eec\") " pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:12.051300 kubelet[2349]: I0123 18:50:12.051236 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-ca-certs\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:12.051300 kubelet[2349]: I0123 18:50:12.051253 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:12.051300 kubelet[2349]: I0123 18:50:12.051268 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-flexvolume-dir\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:12.051300 kubelet[2349]: I0123 18:50:12.051283 2349 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-k8s-certs\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:12.051379 kubelet[2349]: I0123 18:50:12.051299 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-kubeconfig\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:12.221367 kubelet[2349]: I0123 18:50:12.221141 2349 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-220" Jan 23 18:50:12.224522 kubelet[2349]: E0123 18:50:12.224474 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.197.220:6443/api/v1/nodes\": dial tcp 172.239.197.220:6443: connect: connection refused" node="172-239-197-220" Jan 23 18:50:12.303640 kubelet[2349]: E0123 18:50:12.303514 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.304986 containerd[1560]: time="2026-01-23T18:50:12.304939659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-197-220,Uid:c277d3d1589d8dcc5cce8593e1b3b23d,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:12.318021 kubelet[2349]: E0123 18:50:12.317992 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.318756 containerd[1560]: time="2026-01-23T18:50:12.318514486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-197-220,Uid:c34fd9332156ff57ab1ca58fb26c8eec,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:12.325109 kubelet[2349]: E0123 18:50:12.325084 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.327590 containerd[1560]: time="2026-01-23T18:50:12.327568341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-197-220,Uid:d8e9edf0b6f04089e2b0758f210cfcf8,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:12.328107 containerd[1560]: time="2026-01-23T18:50:12.328085711Z" level=info msg="connecting to shim 79e8439126d03d4f42858f3f85c494b6cc78bafc9b245fbc5e9fe4ae0e343a35" address="unix:///run/containerd/s/c5e36caf6b32cf0c1d46a6cd19b06df7c33cb4f9ccd44dd3c51059964e8687da" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:12.350974 containerd[1560]: time="2026-01-23T18:50:12.350942972Z" level=info msg="connecting to shim 1c8da2eb930914dd42161551ae75a9afb5baacf5a39c8bde7576a78aab3d7a12" address="unix:///run/containerd/s/be5ecec097a1b79147b7f9cb93d0e7fe67f658fa9e1a6e6ab0315837b9b34416" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:12.369696 containerd[1560]: time="2026-01-23T18:50:12.369176781Z" level=info msg="connecting to shim e896471d374fb6108d181b00e321c15590ff25bcfd34c641eb031cf3390b4cc2" 
address="unix:///run/containerd/s/ca82d51a5a28f9bdece0d0bde67c6dc458bc2bbde350a11e5c4532d8715944a5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:12.391981 systemd[1]: Started cri-containerd-79e8439126d03d4f42858f3f85c494b6cc78bafc9b245fbc5e9fe4ae0e343a35.scope - libcontainer container 79e8439126d03d4f42858f3f85c494b6cc78bafc9b245fbc5e9fe4ae0e343a35. Jan 23 18:50:12.405200 systemd[1]: Started cri-containerd-1c8da2eb930914dd42161551ae75a9afb5baacf5a39c8bde7576a78aab3d7a12.scope - libcontainer container 1c8da2eb930914dd42161551ae75a9afb5baacf5a39c8bde7576a78aab3d7a12. Jan 23 18:50:12.421909 systemd[1]: Started cri-containerd-e896471d374fb6108d181b00e321c15590ff25bcfd34c641eb031cf3390b4cc2.scope - libcontainer container e896471d374fb6108d181b00e321c15590ff25bcfd34c641eb031cf3390b4cc2. Jan 23 18:50:12.453971 kubelet[2349]: E0123 18:50:12.453478 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.197.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-197-220?timeout=10s\": dial tcp 172.239.197.220:6443: connect: connection refused" interval="800ms" Jan 23 18:50:12.490046 containerd[1560]: time="2026-01-23T18:50:12.489923092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-197-220,Uid:c277d3d1589d8dcc5cce8593e1b3b23d,Namespace:kube-system,Attempt:0,} returns sandbox id \"79e8439126d03d4f42858f3f85c494b6cc78bafc9b245fbc5e9fe4ae0e343a35\"" Jan 23 18:50:12.494741 kubelet[2349]: E0123 18:50:12.494695 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.506219 containerd[1560]: time="2026-01-23T18:50:12.506083470Z" level=info msg="CreateContainer within sandbox \"79e8439126d03d4f42858f3f85c494b6cc78bafc9b245fbc5e9fe4ae0e343a35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:50:12.510050 containerd[1560]: time="2026-01-23T18:50:12.510011252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-197-220,Uid:d8e9edf0b6f04089e2b0758f210cfcf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e896471d374fb6108d181b00e321c15590ff25bcfd34c641eb031cf3390b4cc2\"" Jan 23 18:50:12.514092 kubelet[2349]: E0123 18:50:12.513498 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.518466 containerd[1560]: time="2026-01-23T18:50:12.518440796Z" level=info msg="CreateContainer within sandbox \"e896471d374fb6108d181b00e321c15590ff25bcfd34c641eb031cf3390b4cc2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:50:12.525049 containerd[1560]: time="2026-01-23T18:50:12.525000179Z" level=info msg="Container 793cb21e466659dc453e62dbd015e0c055f29bf96a8cf1d7fb76c480bee2a11f: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:12.527090 containerd[1560]: time="2026-01-23T18:50:12.527020970Z" level=info msg="Container ec79da361536d7d4127b6380fd09d4f8c896f3aada8ddd37f8db53525ec4f274: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:12.528067 containerd[1560]: time="2026-01-23T18:50:12.528041571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-197-220,Uid:c34fd9332156ff57ab1ca58fb26c8eec,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"1c8da2eb930914dd42161551ae75a9afb5baacf5a39c8bde7576a78aab3d7a12\"" Jan 23 18:50:12.528677 kubelet[2349]: E0123 18:50:12.528644 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.535203 containerd[1560]: time="2026-01-23T18:50:12.534981324Z" level=info msg="CreateContainer within sandbox \"79e8439126d03d4f42858f3f85c494b6cc78bafc9b245fbc5e9fe4ae0e343a35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"793cb21e466659dc453e62dbd015e0c055f29bf96a8cf1d7fb76c480bee2a11f\"" Jan 23 18:50:12.536548 containerd[1560]: time="2026-01-23T18:50:12.536486145Z" level=info msg="StartContainer for \"793cb21e466659dc453e62dbd015e0c055f29bf96a8cf1d7fb76c480bee2a11f\"" Jan 23 18:50:12.537211 containerd[1560]: time="2026-01-23T18:50:12.537128195Z" level=info msg="CreateContainer within sandbox \"e896471d374fb6108d181b00e321c15590ff25bcfd34c641eb031cf3390b4cc2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec79da361536d7d4127b6380fd09d4f8c896f3aada8ddd37f8db53525ec4f274\"" Jan 23 18:50:12.537597 containerd[1560]: time="2026-01-23T18:50:12.537522665Z" level=info msg="CreateContainer within sandbox \"1c8da2eb930914dd42161551ae75a9afb5baacf5a39c8bde7576a78aab3d7a12\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:50:12.538645 containerd[1560]: time="2026-01-23T18:50:12.538579226Z" level=info msg="connecting to shim 793cb21e466659dc453e62dbd015e0c055f29bf96a8cf1d7fb76c480bee2a11f" address="unix:///run/containerd/s/c5e36caf6b32cf0c1d46a6cd19b06df7c33cb4f9ccd44dd3c51059964e8687da" protocol=ttrpc version=3 Jan 23 18:50:12.539342 containerd[1560]: time="2026-01-23T18:50:12.539266656Z" level=info msg="StartContainer for \"ec79da361536d7d4127b6380fd09d4f8c896f3aada8ddd37f8db53525ec4f274\"" Jan 23 18:50:12.540732 containerd[1560]: time="2026-01-23T18:50:12.540642257Z" level=info msg="connecting to shim ec79da361536d7d4127b6380fd09d4f8c896f3aada8ddd37f8db53525ec4f274" address="unix:///run/containerd/s/ca82d51a5a28f9bdece0d0bde67c6dc458bc2bbde350a11e5c4532d8715944a5" protocol=ttrpc version=3 Jan 23 18:50:12.547327 containerd[1560]: time="2026-01-23T18:50:12.547307060Z" level=info msg="Container 26eb79ca1be11d3e1260ba12bc870b376914f760b1a02caaafc84f70d84c7a84: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:12.556245 containerd[1560]: time="2026-01-23T18:50:12.555716285Z" level=info msg="CreateContainer within sandbox \"1c8da2eb930914dd42161551ae75a9afb5baacf5a39c8bde7576a78aab3d7a12\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26eb79ca1be11d3e1260ba12bc870b376914f760b1a02caaafc84f70d84c7a84\"" Jan 23 18:50:12.557209 containerd[1560]: time="2026-01-23T18:50:12.557176825Z" level=info msg="StartContainer for \"26eb79ca1be11d3e1260ba12bc870b376914f760b1a02caaafc84f70d84c7a84\"" Jan 23 18:50:12.558651 containerd[1560]: time="2026-01-23T18:50:12.558561186Z" level=info msg="connecting to shim 26eb79ca1be11d3e1260ba12bc870b376914f760b1a02caaafc84f70d84c7a84" address="unix:///run/containerd/s/be5ecec097a1b79147b7f9cb93d0e7fe67f658fa9e1a6e6ab0315837b9b34416" protocol=ttrpc version=3 Jan 23 18:50:12.564935 systemd[1]: Started cri-containerd-ec79da361536d7d4127b6380fd09d4f8c896f3aada8ddd37f8db53525ec4f274.scope - libcontainer container ec79da361536d7d4127b6380fd09d4f8c896f3aada8ddd37f8db53525ec4f274. 
Jan 23 18:50:12.581896 systemd[1]: Started cri-containerd-793cb21e466659dc453e62dbd015e0c055f29bf96a8cf1d7fb76c480bee2a11f.scope - libcontainer container 793cb21e466659dc453e62dbd015e0c055f29bf96a8cf1d7fb76c480bee2a11f. Jan 23 18:50:12.590419 systemd[1]: Started cri-containerd-26eb79ca1be11d3e1260ba12bc870b376914f760b1a02caaafc84f70d84c7a84.scope - libcontainer container 26eb79ca1be11d3e1260ba12bc870b376914f760b1a02caaafc84f70d84c7a84. Jan 23 18:50:12.627784 kubelet[2349]: I0123 18:50:12.627469 2349 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-220" Jan 23 18:50:12.627784 kubelet[2349]: E0123 18:50:12.627740 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.197.220:6443/api/v1/nodes\": dial tcp 172.239.197.220:6443: connect: connection refused" node="172-239-197-220" Jan 23 18:50:12.652713 kubelet[2349]: E0123 18:50:12.652653 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.197.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.197.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:50:12.673358 containerd[1560]: time="2026-01-23T18:50:12.673296053Z" level=info msg="StartContainer for \"26eb79ca1be11d3e1260ba12bc870b376914f760b1a02caaafc84f70d84c7a84\" returns successfully" Jan 23 18:50:12.697647 containerd[1560]: time="2026-01-23T18:50:12.697608865Z" level=info msg="StartContainer for \"ec79da361536d7d4127b6380fd09d4f8c896f3aada8ddd37f8db53525ec4f274\" returns successfully" Jan 23 18:50:12.703389 containerd[1560]: time="2026-01-23T18:50:12.703360788Z" level=info msg="StartContainer for \"793cb21e466659dc453e62dbd015e0c055f29bf96a8cf1d7fb76c480bee2a11f\" returns successfully" Jan 23 18:50:12.887731 kubelet[2349]: E0123 18:50:12.887640 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:12.888355 kubelet[2349]: E0123 18:50:12.888296 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.889464 kubelet[2349]: E0123 18:50:12.888958 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:12.889546 kubelet[2349]: E0123 18:50:12.889529 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:12.891364 kubelet[2349]: E0123 18:50:12.891346 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:12.891454 kubelet[2349]: E0123 18:50:12.891438 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:13.429437 kubelet[2349]: I0123 18:50:13.429406 2349 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-220" Jan 23 18:50:13.893261 kubelet[2349]: E0123 18:50:13.893232 2349 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:13.893628 kubelet[2349]: E0123 18:50:13.893362 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:13.896207 kubelet[2349]: E0123 18:50:13.896118 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:13.896341 kubelet[2349]: E0123 18:50:13.896327 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:14.165543 kubelet[2349]: E0123 18:50:14.165403 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-197-220\" not found" node="172-239-197-220" Jan 23 18:50:14.229794 kubelet[2349]: I0123 18:50:14.228757 2349 kubelet_node_status.go:78] "Successfully registered node" node="172-239-197-220" Jan 23 18:50:14.249054 kubelet[2349]: I0123 18:50:14.248901 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-197-220" Jan 23 18:50:14.273817 kubelet[2349]: E0123 18:50:14.273649 2349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-197-220\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-197-220" Jan 23 18:50:14.273817 kubelet[2349]: I0123 18:50:14.273676 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:14.277808 kubelet[2349]: E0123 18:50:14.277787 2349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-197-220\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:14.277808 kubelet[2349]: I0123 18:50:14.277806 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:14.279714 kubelet[2349]: E0123 18:50:14.279694 2349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-197-220\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:14.826916 kubelet[2349]: I0123 18:50:14.826874 2349 apiserver.go:52] "Watching apiserver" Jan 23 18:50:14.849734 kubelet[2349]: I0123 18:50:14.849682 2349 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:50:15.365142 kubelet[2349]: I0123 18:50:15.365094 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-197-220" Jan 23 18:50:15.370020 kubelet[2349]: E0123 18:50:15.369980 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:15.896912 kubelet[2349]: E0123 18:50:15.896826 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 
18:50:16.349083 systemd[1]: Reload requested from client PID 2624 ('systemctl') (unit session-7.scope)... Jan 23 18:50:16.349103 systemd[1]: Reloading... Jan 23 18:50:16.454808 zram_generator::config[2664]: No configuration found. Jan 23 18:50:16.722163 systemd[1]: Reloading finished in 372 ms. Jan 23 18:50:16.756332 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:16.773793 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:50:16.774266 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:16.774361 systemd[1]: kubelet.service: Consumed 887ms CPU time, 129.9M memory peak. Jan 23 18:50:16.777272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:50:16.987467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:50:16.998240 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:50:17.046634 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:50:17.046634 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:50:17.046634 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:50:17.046634 kubelet[2719]: I0123 18:50:17.046311 2719 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:50:17.059134 kubelet[2719]: I0123 18:50:17.059097 2719 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:50:17.059256 kubelet[2719]: I0123 18:50:17.059245 2719 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:50:17.059527 kubelet[2719]: I0123 18:50:17.059514 2719 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:50:17.061160 kubelet[2719]: I0123 18:50:17.061139 2719 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:50:17.064277 kubelet[2719]: I0123 18:50:17.063845 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:50:17.068933 kubelet[2719]: I0123 18:50:17.068911 2719 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:50:17.073655 kubelet[2719]: I0123 18:50:17.073634 2719 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 18:50:17.073963 kubelet[2719]: I0123 18:50:17.073942 2719 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:50:17.074104 kubelet[2719]: I0123 18:50:17.073964 2719 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-197-220","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:50:17.074207 kubelet[2719]: I0123 18:50:17.074107 2719 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:50:17.074207 kubelet[2719]: I0123 18:50:17.074116 2719 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:50:17.074207 kubelet[2719]: I0123 18:50:17.074162 2719 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:17.074365 kubelet[2719]: I0123 18:50:17.074349 2719 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:50:17.074396 kubelet[2719]: I0123 18:50:17.074366 2719 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:50:17.075024 kubelet[2719]: I0123 18:50:17.074997 2719 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:50:17.075071 kubelet[2719]: I0123 18:50:17.075034 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:50:17.078279 kubelet[2719]: I0123 18:50:17.076338 2719 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:50:17.078279 kubelet[2719]: I0123 18:50:17.077214 2719 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:50:17.083749 kubelet[2719]: I0123 18:50:17.083717 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:50:17.084006 kubelet[2719]: I0123 18:50:17.083987 2719 server.go:1289] "Started kubelet" Jan 23 18:50:17.088596 kubelet[2719]: I0123 18:50:17.088576 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:50:17.103496 kubelet[2719]: I0123 
18:50:17.103434 2719 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:50:17.105291 kubelet[2719]: I0123 18:50:17.105275 2719 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:50:17.108558 kubelet[2719]: E0123 18:50:17.107841 2719 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:50:17.108558 kubelet[2719]: I0123 18:50:17.107991 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:50:17.108558 kubelet[2719]: I0123 18:50:17.108313 2719 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:50:17.109290 kubelet[2719]: I0123 18:50:17.109266 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:50:17.110355 kubelet[2719]: I0123 18:50:17.109842 2719 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:50:17.112574 kubelet[2719]: I0123 18:50:17.112560 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:50:17.112920 kubelet[2719]: I0123 18:50:17.112906 2719 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:50:17.116089 kubelet[2719]: I0123 18:50:17.116061 2719 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:50:17.118105 kubelet[2719]: I0123 18:50:17.118065 2719 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:50:17.118221 kubelet[2719]: I0123 18:50:17.118211 2719 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:50:17.118317 kubelet[2719]: I0123 18:50:17.118306 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
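On this second start, the kubelet loads its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, a single file holding both certificate and key blocks. tls.LoadX509KeyPair accepts the same path for both arguments in that layout; a minimal stand-alone check, not the certificate_store implementation:

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Combined cert+key PEM, as named in the certificate_store record above.
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Printf("loaded client certificate chain of %d block(s)\n", len(cert.Certificate))
}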
Jan 23 18:50:17.118388 kubelet[2719]: I0123 18:50:17.118379 2719 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:50:17.118578 kubelet[2719]: E0123 18:50:17.118561 2719 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:50:17.119038 kubelet[2719]: I0123 18:50:17.119013 2719 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:50:17.119157 kubelet[2719]: I0123 18:50:17.119125 2719 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:50:17.122306 kubelet[2719]: I0123 18:50:17.122285 2719 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:50:17.184020 kubelet[2719]: I0123 18:50:17.183992 2719 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:50:17.184288 kubelet[2719]: I0123 18:50:17.184272 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:50:17.184373 kubelet[2719]: I0123 18:50:17.184360 2719 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:50:17.184586 kubelet[2719]: I0123 18:50:17.184572 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:50:17.184669 kubelet[2719]: I0123 18:50:17.184645 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:50:17.184713 kubelet[2719]: I0123 18:50:17.184706 2719 policy_none.go:49] "None policy: Start" Jan 23 18:50:17.184796 kubelet[2719]: I0123 18:50:17.184786 2719 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:50:17.184853 kubelet[2719]: I0123 18:50:17.184845 2719 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:50:17.184988 kubelet[2719]: I0123 18:50:17.184978 2719 state_mem.go:75] "Updated machine memory state" Jan 23 18:50:17.190037 kubelet[2719]: E0123 18:50:17.190017 2719 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:50:17.192584 kubelet[2719]: I0123 18:50:17.191820 2719 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:50:17.192584 kubelet[2719]: I0123 18:50:17.191835 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:50:17.194246 kubelet[2719]: I0123 18:50:17.194228 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:50:17.194620 kubelet[2719]: E0123 18:50:17.194346 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:50:17.219479 kubelet[2719]: I0123 18:50:17.219437 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:17.221277 kubelet[2719]: I0123 18:50:17.220047 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-197-220" Jan 23 18:50:17.221277 kubelet[2719]: I0123 18:50:17.220109 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:17.227018 kubelet[2719]: E0123 18:50:17.226904 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-197-220\" already exists" pod="kube-system/kube-scheduler-172-239-197-220" Jan 23 18:50:17.305807 kubelet[2719]: I0123 18:50:17.304271 2719 kubelet_node_status.go:75] "Attempting to register node" node="172-239-197-220" Jan 23 18:50:17.312065 kubelet[2719]: I0123 18:50:17.312017 2719 kubelet_node_status.go:124] "Node was previously registered" node="172-239-197-220" Jan 23 18:50:17.312246 kubelet[2719]: I0123 18:50:17.312097 2719 kubelet_node_status.go:78] "Successfully registered node" node="172-239-197-220" Jan 23 18:50:17.314092 kubelet[2719]: I0123 18:50:17.314008 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-ca-certs\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:17.314092 kubelet[2719]: I0123 18:50:17.314033 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-flexvolume-dir\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:17.314092 kubelet[2719]: I0123 18:50:17.314052 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-k8s-certs\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:17.314092 kubelet[2719]: I0123 18:50:17.314067 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c34fd9332156ff57ab1ca58fb26c8eec-k8s-certs\") pod \"kube-apiserver-172-239-197-220\" (UID: \"c34fd9332156ff57ab1ca58fb26c8eec\") " pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:17.314242 kubelet[2719]: I0123 18:50:17.314174 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-kubeconfig\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:17.314242 kubelet[2719]: I0123 18:50:17.314193 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d8e9edf0b6f04089e2b0758f210cfcf8-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-197-220\" (UID: \"d8e9edf0b6f04089e2b0758f210cfcf8\") " pod="kube-system/kube-controller-manager-172-239-197-220" Jan 23 18:50:17.314242 kubelet[2719]: I0123 18:50:17.314208 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c277d3d1589d8dcc5cce8593e1b3b23d-kubeconfig\") pod \"kube-scheduler-172-239-197-220\" (UID: \"c277d3d1589d8dcc5cce8593e1b3b23d\") " pod="kube-system/kube-scheduler-172-239-197-220" Jan 23 18:50:17.314242 kubelet[2719]: I0123 18:50:17.314222 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c34fd9332156ff57ab1ca58fb26c8eec-ca-certs\") pod \"kube-apiserver-172-239-197-220\" (UID: \"c34fd9332156ff57ab1ca58fb26c8eec\") " pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:17.314347 kubelet[2719]: I0123 18:50:17.314264 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c34fd9332156ff57ab1ca58fb26c8eec-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-197-220\" (UID: \"c34fd9332156ff57ab1ca58fb26c8eec\") " pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:17.528105 kubelet[2719]: E0123 18:50:17.528031 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:17.528945 kubelet[2719]: E0123 18:50:17.528900 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:17.530018 kubelet[2719]: E0123 18:50:17.529982 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:18.086273 kubelet[2719]: I0123 18:50:18.086214 2719 apiserver.go:52] "Watching apiserver" Jan 23 18:50:18.112868 kubelet[2719]: I0123 18:50:18.112819 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:50:18.166717 kubelet[2719]: E0123 18:50:18.166680 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:18.167119 kubelet[2719]: I0123 18:50:18.167104 2719 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:18.167882 kubelet[2719]: E0123 18:50:18.167864 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:18.179220 kubelet[2719]: E0123 18:50:18.179175 2719 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-197-220\" already exists" pod="kube-system/kube-apiserver-172-239-197-220" Jan 23 18:50:18.179405 kubelet[2719]: E0123 18:50:18.179374 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:18.208891 kubelet[2719]: I0123 18:50:18.208845 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-197-220" podStartSLOduration=1.208829759 podStartE2EDuration="1.208829759s" podCreationTimestamp="2026-01-23 18:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:18.197847794 +0000 UTC m=+1.193681998" watchObservedRunningTime="2026-01-23 18:50:18.208829759 +0000 UTC m=+1.204663973" Jan 23 18:50:18.209321 kubelet[2719]: I0123 18:50:18.209010 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-197-220" podStartSLOduration=3.209006429 podStartE2EDuration="3.209006429s" podCreationTimestamp="2026-01-23 18:50:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:18.208614229 +0000 UTC m=+1.204448433" watchObservedRunningTime="2026-01-23 18:50:18.209006429 +0000 UTC m=+1.204840633" Jan 23 18:50:18.232039 kubelet[2719]: I0123 18:50:18.231905 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-197-220" podStartSLOduration=1.231897231 podStartE2EDuration="1.231897231s" podCreationTimestamp="2026-01-23 18:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:18.223100216 +0000 UTC m=+1.218934420" watchObservedRunningTime="2026-01-23 18:50:18.231897231 +0000 UTC m=+1.227731435" Jan 23 18:50:19.168521 kubelet[2719]: E0123 18:50:19.168202 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:19.168521 kubelet[2719]: E0123 18:50:19.168202 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:20.169658 kubelet[2719]: E0123 18:50:20.169598 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:20.281400 kubelet[2719]: E0123 18:50:20.279628 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:20.959478 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 18:50:22.623895 kubelet[2719]: I0123 18:50:22.623860 2719 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:50:22.624837 kubelet[2719]: I0123 18:50:22.624401 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:50:22.624930 containerd[1560]: time="2026-01-23T18:50:22.624230015Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
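The pod_startup_latency_tracker records above report podStartE2EDuration as observedRunningTime minus podCreationTimestamp: for the controller-manager pod, 18:50:18.208829759 minus 18:50:17 gives exactly the logged 1.208829759s. Reproducing that arithmetic with the log's own timestamps (the layout string matches their "+0000 UTC" formatting):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-23 18:50:17 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-23 18:50:18.208829759 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("podStartE2EDuration:", running.Sub(created)) // 1.208829759s
}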
Jan 23 18:50:23.491191 systemd[1]: Created slice kubepods-besteffort-podc430ddca_410d_467c_b827_6aaa9a20d41e.slice - libcontainer container kubepods-besteffort-podc430ddca_410d_467c_b827_6aaa9a20d41e.slice. Jan 23 18:50:23.555224 kubelet[2719]: I0123 18:50:23.555193 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c430ddca-410d-467c-b827-6aaa9a20d41e-lib-modules\") pod \"kube-proxy-4fzh7\" (UID: \"c430ddca-410d-467c-b827-6aaa9a20d41e\") " pod="kube-system/kube-proxy-4fzh7" Jan 23 18:50:23.555224 kubelet[2719]: I0123 18:50:23.555222 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c430ddca-410d-467c-b827-6aaa9a20d41e-kube-proxy\") pod \"kube-proxy-4fzh7\" (UID: \"c430ddca-410d-467c-b827-6aaa9a20d41e\") " pod="kube-system/kube-proxy-4fzh7" Jan 23 18:50:23.555402 kubelet[2719]: I0123 18:50:23.555241 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c430ddca-410d-467c-b827-6aaa9a20d41e-xtables-lock\") pod \"kube-proxy-4fzh7\" (UID: \"c430ddca-410d-467c-b827-6aaa9a20d41e\") " pod="kube-system/kube-proxy-4fzh7" Jan 23 18:50:23.555402 kubelet[2719]: I0123 18:50:23.555256 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctdf\" (UniqueName: \"kubernetes.io/projected/c430ddca-410d-467c-b827-6aaa9a20d41e-kube-api-access-bctdf\") pod \"kube-proxy-4fzh7\" (UID: \"c430ddca-410d-467c-b827-6aaa9a20d41e\") " pod="kube-system/kube-proxy-4fzh7" Jan 23 18:50:23.654378 systemd[1]: Created slice kubepods-besteffort-pod139be7f9_4b03_4363_83cb_30ac61a31f5c.slice - libcontainer container kubepods-besteffort-pod139be7f9_4b03_4363_83cb_30ac61a31f5c.slice. 
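The kubepods slice names above embed the QoS class and the pod UID with dashes mapped to underscores: UID c430ddca-410d-467c-b827-6aaa9a20d41e becomes kubepods-besteffort-podc430ddca_410d_467c_b827_6aaa9a20d41e.slice. A small helper reproducing that naming rule as observed in this log; sliceName is an illustrative function, not kubelet's cgroup manager:

package main

import (
	"fmt"
	"strings"
)

// sliceName builds a systemd slice name from a QoS class and pod UID,
// following the dash-to-underscore convention visible in the records above.
func sliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" {
		// Guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	fmt.Println(sliceName("besteffort", "c430ddca-410d-467c-b827-6aaa9a20d41e"))
	fmt.Println(sliceName("burstable", "c277d3d1589d8dcc5cce8593e1b3b23d"))
}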
Jan 23 18:50:23.655540 kubelet[2719]: I0123 18:50:23.655497 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5mfw\" (UniqueName: \"kubernetes.io/projected/139be7f9-4b03-4363-83cb-30ac61a31f5c-kube-api-access-c5mfw\") pod \"tigera-operator-7dcd859c48-jgb6m\" (UID: \"139be7f9-4b03-4363-83cb-30ac61a31f5c\") " pod="tigera-operator/tigera-operator-7dcd859c48-jgb6m" Jan 23 18:50:23.655870 kubelet[2719]: I0123 18:50:23.655584 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/139be7f9-4b03-4363-83cb-30ac61a31f5c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jgb6m\" (UID: \"139be7f9-4b03-4363-83cb-30ac61a31f5c\") " pod="tigera-operator/tigera-operator-7dcd859c48-jgb6m" Jan 23 18:50:23.805508 kubelet[2719]: E0123 18:50:23.805390 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:23.806821 containerd[1560]: time="2026-01-23T18:50:23.806761701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4fzh7,Uid:c430ddca-410d-467c-b827-6aaa9a20d41e,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:23.823797 containerd[1560]: time="2026-01-23T18:50:23.823191954Z" level=info msg="connecting to shim 33edbc02b6b2c15637ee4960ff25e76e78744f1bb782ac5d4bc57f5abcaf530a" address="unix:///run/containerd/s/6ce7e4e85c6ceb301c901250714efca1313c04749088b644f792dbb84f17647b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:23.851912 systemd[1]: Started cri-containerd-33edbc02b6b2c15637ee4960ff25e76e78744f1bb782ac5d4bc57f5abcaf530a.scope - libcontainer container 33edbc02b6b2c15637ee4960ff25e76e78744f1bb782ac5d4bc57f5abcaf530a. 
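The kube-api-access-* projected volumes above (kube-api-access-bctdf for kube-proxy-4fzh7, kube-api-access-c5mfw for the tigera-operator pod) are what surface the service account token inside the container. A process in the pod can read it at the conventional mount point below; that path is the standard in-pod location, not something shown verbatim in this log:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Standard mount point of the projected service account token.
	token, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		fmt.Println("token not mounted:", err)
		return
	}
	fmt.Printf("service account token present (%d bytes)\n", len(token))
}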
Jan 23 18:50:23.888857 containerd[1560]: time="2026-01-23T18:50:23.888756650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4fzh7,Uid:c430ddca-410d-467c-b827-6aaa9a20d41e,Namespace:kube-system,Attempt:0,} returns sandbox id \"33edbc02b6b2c15637ee4960ff25e76e78744f1bb782ac5d4bc57f5abcaf530a\"" Jan 23 18:50:23.890159 kubelet[2719]: E0123 18:50:23.890134 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:23.895993 containerd[1560]: time="2026-01-23T18:50:23.895737279Z" level=info msg="CreateContainer within sandbox \"33edbc02b6b2c15637ee4960ff25e76e78744f1bb782ac5d4bc57f5abcaf530a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:50:23.915647 containerd[1560]: time="2026-01-23T18:50:23.915598430Z" level=info msg="Container d1e9ee4ecfd533fd64abe85b87d0db64e72203c0f898115fde7a49ad8bc5e818: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:23.924255 containerd[1560]: time="2026-01-23T18:50:23.924211185Z" level=info msg="CreateContainer within sandbox \"33edbc02b6b2c15637ee4960ff25e76e78744f1bb782ac5d4bc57f5abcaf530a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1e9ee4ecfd533fd64abe85b87d0db64e72203c0f898115fde7a49ad8bc5e818\"" Jan 23 18:50:23.926811 containerd[1560]: time="2026-01-23T18:50:23.925981985Z" level=info msg="StartContainer for \"d1e9ee4ecfd533fd64abe85b87d0db64e72203c0f898115fde7a49ad8bc5e818\"" Jan 23 18:50:23.927263 containerd[1560]: time="2026-01-23T18:50:23.927238903Z" level=info msg="connecting to shim d1e9ee4ecfd533fd64abe85b87d0db64e72203c0f898115fde7a49ad8bc5e818" address="unix:///run/containerd/s/6ce7e4e85c6ceb301c901250714efca1313c04749088b644f792dbb84f17647b" protocol=ttrpc version=3 Jan 23 18:50:23.951968 systemd[1]: Started cri-containerd-d1e9ee4ecfd533fd64abe85b87d0db64e72203c0f898115fde7a49ad8bc5e818.scope - libcontainer container d1e9ee4ecfd533fd64abe85b87d0db64e72203c0f898115fde7a49ad8bc5e818. Jan 23 18:50:23.964058 containerd[1560]: time="2026-01-23T18:50:23.964014527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jgb6m,Uid:139be7f9-4b03-4363-83cb-30ac61a31f5c,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:50:23.986799 containerd[1560]: time="2026-01-23T18:50:23.986695871Z" level=info msg="connecting to shim dc39f641a9398a3872c523dd1feb393406a44b0ab2bf85b6721b92ef96a89124" address="unix:///run/containerd/s/89f9ed75a818ee06a7237186bc9c3c7e09e4a86ef0d5bad03e6929d1bff343c5" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:24.021150 systemd[1]: Started cri-containerd-dc39f641a9398a3872c523dd1feb393406a44b0ab2bf85b6721b92ef96a89124.scope - libcontainer container dc39f641a9398a3872c523dd1feb393406a44b0ab2bf85b6721b92ef96a89124. 
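
The connecting-to-shim / CreateContainer / StartContainer sequence above is kubelet's CRI plugin driving containerd over its ttrpc shim API. The same pull-create-start lifecycle can be exercised directly with containerd's Go client; a minimal sketch, assuming the pre-2.0 module path github.com/containerd/containerd and reusing the image already referenced in this log:

    package main

    import (
        "context"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Same socket family the shims above use; "k8s.io" matches the
        // namespace= field in the connecting-to-shim messages.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        // NewContainer + NewTask is the client-side analogue of the
        // CreateContainer / StartContainer pair logged by the CRI plugin.
        container, err := client.NewContainer(ctx, "demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)))
        if err != nil {
            panic(err)
        }
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            panic(err)
        }
        if err := task.Start(ctx); err != nil {
            panic(err)
        }
    }

The CRI layer adds sandbox setup (pause container, CNI wiring) on top of this, which is what the RunPodSandbox messages record.
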
Jan 23 18:50:24.067697 containerd[1560]: time="2026-01-23T18:50:24.067572703Z" level=info msg="StartContainer for \"d1e9ee4ecfd533fd64abe85b87d0db64e72203c0f898115fde7a49ad8bc5e818\" returns successfully" Jan 23 18:50:24.109870 containerd[1560]: time="2026-01-23T18:50:24.109816063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jgb6m,Uid:139be7f9-4b03-4363-83cb-30ac61a31f5c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dc39f641a9398a3872c523dd1feb393406a44b0ab2bf85b6721b92ef96a89124\"" Jan 23 18:50:24.112069 containerd[1560]: time="2026-01-23T18:50:24.111929658Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:50:24.180350 kubelet[2719]: E0123 18:50:24.180103 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:24.190513 kubelet[2719]: I0123 18:50:24.190371 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4fzh7" podStartSLOduration=1.1903562380000001 podStartE2EDuration="1.190356238s" podCreationTimestamp="2026-01-23 18:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:24.190068872 +0000 UTC m=+7.185903076" watchObservedRunningTime="2026-01-23 18:50:24.190356238 +0000 UTC m=+7.186190442" Jan 23 18:50:25.184610 kubelet[2719]: E0123 18:50:25.184567 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:25.235233 containerd[1560]: time="2026-01-23T18:50:25.235192945Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:25.239142 containerd[1560]: time="2026-01-23T18:50:25.238920539Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 18:50:25.239388 containerd[1560]: time="2026-01-23T18:50:25.239355508Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:25.242850 containerd[1560]: time="2026-01-23T18:50:25.242820797Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:25.243745 containerd[1560]: time="2026-01-23T18:50:25.243722255Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.131765726s" Jan 23 18:50:25.243838 containerd[1560]: time="2026-01-23T18:50:25.243823257Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:50:25.247124 containerd[1560]: time="2026-01-23T18:50:25.247100873Z" level=info msg="CreateContainer within sandbox \"dc39f641a9398a3872c523dd1feb393406a44b0ab2bf85b6721b92ef96a89124\" for 
container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:50:25.253855 containerd[1560]: time="2026-01-23T18:50:25.252180794Z" level=info msg="Container 8be32c4c70831badd722a4f5f4ef56d539be52d373c1270144538d1f417e2aff: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:25.269195 containerd[1560]: time="2026-01-23T18:50:25.269147483Z" level=info msg="CreateContainer within sandbox \"dc39f641a9398a3872c523dd1feb393406a44b0ab2bf85b6721b92ef96a89124\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8be32c4c70831badd722a4f5f4ef56d539be52d373c1270144538d1f417e2aff\"" Jan 23 18:50:25.270051 containerd[1560]: time="2026-01-23T18:50:25.270001670Z" level=info msg="StartContainer for \"8be32c4c70831badd722a4f5f4ef56d539be52d373c1270144538d1f417e2aff\"" Jan 23 18:50:25.271613 containerd[1560]: time="2026-01-23T18:50:25.271569362Z" level=info msg="connecting to shim 8be32c4c70831badd722a4f5f4ef56d539be52d373c1270144538d1f417e2aff" address="unix:///run/containerd/s/89f9ed75a818ee06a7237186bc9c3c7e09e4a86ef0d5bad03e6929d1bff343c5" protocol=ttrpc version=3 Jan 23 18:50:25.295960 systemd[1]: Started cri-containerd-8be32c4c70831badd722a4f5f4ef56d539be52d373c1270144538d1f417e2aff.scope - libcontainer container 8be32c4c70831badd722a4f5f4ef56d539be52d373c1270144538d1f417e2aff. Jan 23 18:50:25.332651 containerd[1560]: time="2026-01-23T18:50:25.332604132Z" level=info msg="StartContainer for \"8be32c4c70831badd722a4f5f4ef56d539be52d373c1270144538d1f417e2aff\" returns successfully" Jan 23 18:50:25.759322 kubelet[2719]: E0123 18:50:25.759105 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:26.187876 kubelet[2719]: E0123 18:50:26.187025 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:26.203142 kubelet[2719]: I0123 18:50:26.202972 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jgb6m" podStartSLOduration=2.069243675 podStartE2EDuration="3.202958582s" podCreationTimestamp="2026-01-23 18:50:23 +0000 UTC" firstStartedPulling="2026-01-23 18:50:24.110895796 +0000 UTC m=+7.106730000" lastFinishedPulling="2026-01-23 18:50:25.244610703 +0000 UTC m=+8.240444907" observedRunningTime="2026-01-23 18:50:26.202104076 +0000 UTC m=+9.197938330" watchObservedRunningTime="2026-01-23 18:50:26.202958582 +0000 UTC m=+9.198792786" Jan 23 18:50:27.191683 kubelet[2719]: E0123 18:50:27.191336 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:28.241178 kubelet[2719]: E0123 18:50:28.241118 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:30.284015 kubelet[2719]: E0123 18:50:30.283970 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:31.140459 sudo[1793]: pam_unix(sudo:session): session closed for user root Jan 23 18:50:31.165921 sshd[1792]: Connection 
closed by 68.220.241.50 port 46920 Jan 23 18:50:31.169994 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Jan 23 18:50:31.175584 systemd[1]: sshd@6-172.239.197.220:22-68.220.241.50:46920.service: Deactivated successfully. Jan 23 18:50:31.180553 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:50:31.181393 systemd[1]: session-7.scope: Consumed 5.204s CPU time, 232.1M memory peak. Jan 23 18:50:31.186961 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:50:31.192036 systemd-logind[1531]: Removed session 7. Jan 23 18:50:35.602256 update_engine[1536]: I20260123 18:50:35.602160 1536 update_attempter.cc:509] Updating boot flags... Jan 23 18:50:36.048344 systemd[1]: Created slice kubepods-besteffort-podff7a5a97_b367_4478_bc18_c8420f19493a.slice - libcontainer container kubepods-besteffort-podff7a5a97_b367_4478_bc18_c8420f19493a.slice. Jan 23 18:50:36.148063 kubelet[2719]: I0123 18:50:36.147994 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fldvd\" (UniqueName: \"kubernetes.io/projected/ff7a5a97-b367-4478-bc18-c8420f19493a-kube-api-access-fldvd\") pod \"calico-typha-f9bbc79bd-9cqh9\" (UID: \"ff7a5a97-b367-4478-bc18-c8420f19493a\") " pod="calico-system/calico-typha-f9bbc79bd-9cqh9" Jan 23 18:50:36.148063 kubelet[2719]: I0123 18:50:36.148057 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ff7a5a97-b367-4478-bc18-c8420f19493a-typha-certs\") pod \"calico-typha-f9bbc79bd-9cqh9\" (UID: \"ff7a5a97-b367-4478-bc18-c8420f19493a\") " pod="calico-system/calico-typha-f9bbc79bd-9cqh9" Jan 23 18:50:36.148063 kubelet[2719]: I0123 18:50:36.148084 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff7a5a97-b367-4478-bc18-c8420f19493a-tigera-ca-bundle\") pod \"calico-typha-f9bbc79bd-9cqh9\" (UID: \"ff7a5a97-b367-4478-bc18-c8420f19493a\") " pod="calico-system/calico-typha-f9bbc79bd-9cqh9" Jan 23 18:50:36.179690 systemd[1]: Created slice kubepods-besteffort-pod5a20f060_59c3_4a50_bef9_d2477226b041.slice - libcontainer container kubepods-besteffort-pod5a20f060_59c3_4a50_bef9_d2477226b041.slice. 
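
The pod_startup_latency_tracker entries above follow a simple relation: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check in Go against the tigera-operator numbers, with the timestamps copied verbatim from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // From the tigera-operator pod_startup_latency_tracker entry.
        created := parse("2026-01-23 18:50:23 +0000 UTC")
        pullStart := parse("2026-01-23 18:50:24.110895796 +0000 UTC")
        pullEnd := parse("2026-01-23 18:50:25.244610703 +0000 UTC")
        observed := parse("2026-01-23 18:50:26.202958582 +0000 UTC")

        e2e := observed.Sub(created)        // podStartE2EDuration: 3.202958582s
        slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 2.069243675s
        fmt.Println(e2e, slo)
    }

For kube-proxy, whose image needed no pull (note the zero-valued pull timestamps), the two durations coincide at 1.190356238s.
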
Jan 23 18:50:36.249998 kubelet[2719]: I0123 18:50:36.249921 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-lib-modules\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.249998 kubelet[2719]: I0123 18:50:36.250004 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnprs\" (UniqueName: \"kubernetes.io/projected/5a20f060-59c3-4a50-bef9-d2477226b041-kube-api-access-qnprs\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250251 kubelet[2719]: I0123 18:50:36.250038 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-cni-net-dir\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250251 kubelet[2719]: I0123 18:50:36.250060 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-flexvol-driver-host\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250251 kubelet[2719]: I0123 18:50:36.250075 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a20f060-59c3-4a50-bef9-d2477226b041-tigera-ca-bundle\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250251 kubelet[2719]: I0123 18:50:36.250096 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-var-run-calico\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250251 kubelet[2719]: I0123 18:50:36.250114 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-xtables-lock\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250482 kubelet[2719]: I0123 18:50:36.250143 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-policysync\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250482 kubelet[2719]: I0123 18:50:36.250173 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-cni-log-dir\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250482 kubelet[2719]: I0123 18:50:36.250201 2719 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a20f060-59c3-4a50-bef9-d2477226b041-node-certs\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250482 kubelet[2719]: I0123 18:50:36.250229 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-cni-bin-dir\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.250482 kubelet[2719]: I0123 18:50:36.250253 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a20f060-59c3-4a50-bef9-d2477226b041-var-lib-calico\") pod \"calico-node-7qvhr\" (UID: \"5a20f060-59c3-4a50-bef9-d2477226b041\") " pod="calico-system/calico-node-7qvhr" Jan 23 18:50:36.353179 kubelet[2719]: E0123 18:50:36.353027 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:36.354049 containerd[1560]: time="2026-01-23T18:50:36.353999819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9bbc79bd-9cqh9,Uid:ff7a5a97-b367-4478-bc18-c8420f19493a,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:36.354638 kubelet[2719]: E0123 18:50:36.354608 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.354638 kubelet[2719]: W0123 18:50:36.354631 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.354712 kubelet[2719]: E0123 18:50:36.354664 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.355411 kubelet[2719]: E0123 18:50:36.355309 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.355411 kubelet[2719]: W0123 18:50:36.355392 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.355602 kubelet[2719]: E0123 18:50:36.355526 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
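Error: unexpected end of JSON input"

The flood of driver-call.go / plugins.go errors that begins here is kubelet's FlexVolume probe: on each plugin re-scan it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and expects a JSON status object on stdout; the binary is absent, stdout is empty, and the JSON decode fails with "unexpected end of JSON input". A minimal sketch in Go of a driver that would satisfy the init probe (illustrative only; the real nodeagent~uds driver historically shipped with the Istio/Calico node agent, and removing the stale nodeagent~uds directory would equally silence the probe):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // FlexVolume drivers answer each subcommand with a JSON status object on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Report success and declare that no attach/detach support is needed.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            // Defer all other calls to kubelet's default handling.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }
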
[The FlexVolume probe-failure triplet above (driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "executable file not found in $PATH" for /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, plugins.go:703 "Error dynamically probing plugins") repeats with identical content roughly sixty more times between 18:50:36.356 and 18:50:36.573; the duplicates are omitted. The distinct entries from that window follow, ending with the last occurrence of the triplet.]
Jan 23 18:50:36.393665 containerd[1560]: time="2026-01-23T18:50:36.393598389Z" level=info msg="connecting to shim 323fae46398d5d19a889442be296fe7ed4de063fc5619de418024728b208db31" address="unix:///run/containerd/s/9a6ad4235dd8eca75ac8eb4de44666a1e2452d3cc8addb87622eee1ae24e2a5e" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:50:36.416102 kubelet[2719]: E0123 18:50:36.415876 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3"
Jan 23 18:50:36.467364 kubelet[2719]: I0123 18:50:36.466495 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e8a8862-2354-40b6-9db2-d22bd07a4dc3-kubelet-dir\") pod \"csi-node-driver-2b2hd\" (UID: \"9e8a8862-2354-40b6-9db2-d22bd07a4dc3\") " pod="calico-system/csi-node-driver-2b2hd"
Jan 23 18:50:36.467364 kubelet[2719]: I0123 18:50:36.466755 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e8a8862-2354-40b6-9db2-d22bd07a4dc3-varrun\") pod \"csi-node-driver-2b2hd\" (UID: \"9e8a8862-2354-40b6-9db2-d22bd07a4dc3\") " pod="calico-system/csi-node-driver-2b2hd"
Jan 23 18:50:36.467589 kubelet[2719]: I0123 18:50:36.467575 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e8a8862-2354-40b6-9db2-d22bd07a4dc3-registration-dir\") pod \"csi-node-driver-2b2hd\" (UID: \"9e8a8862-2354-40b6-9db2-d22bd07a4dc3\") " pod="calico-system/csi-node-driver-2b2hd"
Jan 23 18:50:36.470434 kubelet[2719]: I0123 18:50:36.467998 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e8a8862-2354-40b6-9db2-d22bd07a4dc3-socket-dir\") pod \"csi-node-driver-2b2hd\" (UID: \"9e8a8862-2354-40b6-9db2-d22bd07a4dc3\") " pod="calico-system/csi-node-driver-2b2hd"
Jan 23 18:50:36.470434 kubelet[2719]: I0123 18:50:36.468505 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpwnz\" (UniqueName: \"kubernetes.io/projected/9e8a8862-2354-40b6-9db2-d22bd07a4dc3-kube-api-access-lpwnz\") pod \"csi-node-driver-2b2hd\" (UID: \"9e8a8862-2354-40b6-9db2-d22bd07a4dc3\") " pod="calico-system/csi-node-driver-2b2hd"
Jan 23 18:50:36.473989 systemd[1]: Started cri-containerd-323fae46398d5d19a889442be296fe7ed4de063fc5619de418024728b208db31.scope - libcontainer container 323fae46398d5d19a889442be296fe7ed4de063fc5619de418024728b208db31.
Jan 23 18:50:36.487727 kubelet[2719]: E0123 18:50:36.487674 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Jan 23 18:50:36.489994 containerd[1560]: time="2026-01-23T18:50:36.489842959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qvhr,Uid:5a20f060-59c3-4a50-bef9-d2477226b041,Namespace:calico-system,Attempt:0,}"
Jan 23 18:50:36.518269 containerd[1560]: time="2026-01-23T18:50:36.518190504Z" level=info msg="connecting to shim c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769" address="unix:///run/containerd/s/a9a1b9d3108ed0693e2f1b797e9c114c466cdc0c9070789dacfd5b3416400193" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:50:36.564016 systemd[1]: Started cri-containerd-c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769.scope - libcontainer container c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769.
Jan 23 18:50:36.573661 kubelet[2719]: E0123 18:50:36.573618 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.573661 kubelet[2719]: W0123 18:50:36.573627 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.573661 kubelet[2719]: E0123 18:50:36.573637 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:36.573980 kubelet[2719]: E0123 18:50:36.573948 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.573980 kubelet[2719]: W0123 18:50:36.573965 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.573980 kubelet[2719]: E0123 18:50:36.573974 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.574362 kubelet[2719]: E0123 18:50:36.574203 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.574362 kubelet[2719]: W0123 18:50:36.574215 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.574362 kubelet[2719]: E0123 18:50:36.574223 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.574495 kubelet[2719]: E0123 18:50:36.574444 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.574495 kubelet[2719]: W0123 18:50:36.574452 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.574495 kubelet[2719]: E0123 18:50:36.574460 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.574974 kubelet[2719]: E0123 18:50:36.574676 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.574974 kubelet[2719]: W0123 18:50:36.574713 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.574974 kubelet[2719]: E0123 18:50:36.574721 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.575327 kubelet[2719]: E0123 18:50:36.575296 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.575327 kubelet[2719]: W0123 18:50:36.575314 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.575327 kubelet[2719]: E0123 18:50:36.575323 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:36.575677 kubelet[2719]: E0123 18:50:36.575541 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.575677 kubelet[2719]: W0123 18:50:36.575555 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.575677 kubelet[2719]: E0123 18:50:36.575563 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.575827 kubelet[2719]: E0123 18:50:36.575808 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.575827 kubelet[2719]: W0123 18:50:36.575817 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.575827 kubelet[2719]: E0123 18:50:36.575824 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.576328 kubelet[2719]: E0123 18:50:36.576283 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.576328 kubelet[2719]: W0123 18:50:36.576298 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.576328 kubelet[2719]: E0123 18:50:36.576307 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.576657 kubelet[2719]: E0123 18:50:36.576540 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.576657 kubelet[2719]: W0123 18:50:36.576555 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.576657 kubelet[2719]: E0123 18:50:36.576562 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.577069 kubelet[2719]: E0123 18:50:36.576817 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.577069 kubelet[2719]: W0123 18:50:36.576832 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.577069 kubelet[2719]: E0123 18:50:36.576840 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:36.577691 kubelet[2719]: E0123 18:50:36.577636 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.577691 kubelet[2719]: W0123 18:50:36.577650 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.577691 kubelet[2719]: E0123 18:50:36.577661 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.578199 kubelet[2719]: E0123 18:50:36.577937 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.578199 kubelet[2719]: W0123 18:50:36.577951 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.578199 kubelet[2719]: E0123 18:50:36.577959 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.578199 kubelet[2719]: E0123 18:50:36.578193 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.578199 kubelet[2719]: W0123 18:50:36.578201 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.578199 kubelet[2719]: E0123 18:50:36.578209 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.578706 kubelet[2719]: E0123 18:50:36.578553 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.578706 kubelet[2719]: W0123 18:50:36.578566 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.578706 kubelet[2719]: E0123 18:50:36.578577 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.579225 kubelet[2719]: E0123 18:50:36.578904 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.579225 kubelet[2719]: W0123 18:50:36.578918 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.579225 kubelet[2719]: E0123 18:50:36.578926 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:36.579591 kubelet[2719]: E0123 18:50:36.579560 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.579591 kubelet[2719]: W0123 18:50:36.579580 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.579591 kubelet[2719]: E0123 18:50:36.579589 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.579891 kubelet[2719]: E0123 18:50:36.579861 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.579891 kubelet[2719]: W0123 18:50:36.579878 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.579891 kubelet[2719]: E0123 18:50:36.579888 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.580212 kubelet[2719]: E0123 18:50:36.580102 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.580212 kubelet[2719]: W0123 18:50:36.580116 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.580212 kubelet[2719]: E0123 18:50:36.580123 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.580807 kubelet[2719]: E0123 18:50:36.580748 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.580807 kubelet[2719]: W0123 18:50:36.580782 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.580807 kubelet[2719]: E0123 18:50:36.580792 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:36.604669 kubelet[2719]: E0123 18:50:36.604448 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:36.604669 kubelet[2719]: W0123 18:50:36.604473 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:36.604669 kubelet[2719]: E0123 18:50:36.604500 2719 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
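The collapsed triplet above has a simple mechanical cause: when the kubelet probes a FlexVolume driver it executes the binary with the argument init and parses whatever the binary printed to stdout as JSON. Here the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ does not exist, so the captured output is the empty string, and unmarshalling an empty input in Go produces exactly the logged message. A minimal sketch, assuming an illustrative DriverStatus shape (not the kubelet's actual type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // DriverStatus stands in for the JSON status a FlexVolume driver is
    // expected to print on stdout for the "init" call (illustrative fields).
    type DriverStatus struct {
    	Status  string `json:"status"`
    	Message string `json:"message,omitempty"`
    }

    func main() {
    	// The driver binary was never found, so the kubelet captured output "".
    	var st DriverStatus
    	err := json.Unmarshal([]byte(""), &st)
    	fmt.Println(err) // prints: unexpected end of JSON input
    }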
Error: unexpected end of JSON input" Jan 23 18:50:36.636247 containerd[1560]: time="2026-01-23T18:50:36.635994492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7qvhr,Uid:5a20f060-59c3-4a50-bef9-d2477226b041,Namespace:calico-system,Attempt:0,} returns sandbox id \"c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769\"" Jan 23 18:50:36.640409 kubelet[2719]: E0123 18:50:36.640382 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:36.643367 containerd[1560]: time="2026-01-23T18:50:36.643320316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 18:50:36.656961 containerd[1560]: time="2026-01-23T18:50:36.656902624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f9bbc79bd-9cqh9,Uid:ff7a5a97-b367-4478-bc18-c8420f19493a,Namespace:calico-system,Attempt:0,} returns sandbox id \"323fae46398d5d19a889442be296fe7ed4de063fc5619de418024728b208db31\"" Jan 23 18:50:36.658334 kubelet[2719]: E0123 18:50:36.658264 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:37.280606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774372252.mount: Deactivated successfully. Jan 23 18:50:37.398960 containerd[1560]: time="2026-01-23T18:50:37.398877795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:37.400271 containerd[1560]: time="2026-01-23T18:50:37.400070977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 23 18:50:37.401009 containerd[1560]: time="2026-01-23T18:50:37.400722323Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:37.402866 containerd[1560]: time="2026-01-23T18:50:37.402749032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:37.403660 containerd[1560]: time="2026-01-23T18:50:37.403399759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 759.939151ms" Jan 23 18:50:37.403660 containerd[1560]: time="2026-01-23T18:50:37.403458139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:50:37.406078 containerd[1560]: time="2026-01-23T18:50:37.405960563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:50:37.408206 containerd[1560]: time="2026-01-23T18:50:37.408133583Z" level=info msg="CreateContainer within sandbox \"c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769\" 
for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:50:37.416464 containerd[1560]: time="2026-01-23T18:50:37.415332452Z" level=info msg="Container 87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:37.426693 containerd[1560]: time="2026-01-23T18:50:37.426661799Z" level=info msg="CreateContainer within sandbox \"c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84\"" Jan 23 18:50:37.427945 containerd[1560]: time="2026-01-23T18:50:37.427905631Z" level=info msg="StartContainer for \"87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84\"" Jan 23 18:50:37.430377 containerd[1560]: time="2026-01-23T18:50:37.430220823Z" level=info msg="connecting to shim 87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84" address="unix:///run/containerd/s/a9a1b9d3108ed0693e2f1b797e9c114c466cdc0c9070789dacfd5b3416400193" protocol=ttrpc version=3 Jan 23 18:50:37.471993 systemd[1]: Started cri-containerd-87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84.scope - libcontainer container 87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84. Jan 23 18:50:37.560678 containerd[1560]: time="2026-01-23T18:50:37.560265416Z" level=info msg="StartContainer for \"87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84\" returns successfully" Jan 23 18:50:37.590511 systemd[1]: cri-containerd-87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84.scope: Deactivated successfully. Jan 23 18:50:37.595145 containerd[1560]: time="2026-01-23T18:50:37.595092986Z" level=info msg="received container exit event container_id:\"87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84\" id:\"87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84\" pid:3376 exited_at:{seconds:1769194237 nanos:594658612}" Jan 23 18:50:38.119831 kubelet[2719]: E0123 18:50:38.119614 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:50:38.228057 kubelet[2719]: E0123 18:50:38.228018 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:38.260974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87aad69f735ad588424f8a3e5d19d86c5b0b3806128585acad83bac429b9bd84-rootfs.mount: Deactivated successfully. 
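As a sanity check, the exited_at value in the container exit event above is a plain Unix timestamp with a nanosecond component; converting it back gives the same instant the journal recorded:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// exited_at:{seconds:1769194237 nanos:594658612} from the event above.
    	exited := time.Unix(1769194237, 594658612).UTC()
    	fmt.Println(exited.Format(time.RFC3339Nano)) // 2026-01-23T18:50:37.594658612Z
    }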
Jan 23 18:50:38.635439 containerd[1560]: time="2026-01-23T18:50:38.635344504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:38.636274 containerd[1560]: time="2026-01-23T18:50:38.636246682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 23 18:50:38.636758 containerd[1560]: time="2026-01-23T18:50:38.636735946Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:38.638999 containerd[1560]: time="2026-01-23T18:50:38.638957296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:38.639345 containerd[1560]: time="2026-01-23T18:50:38.639317019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.233071144s" Jan 23 18:50:38.639389 containerd[1560]: time="2026-01-23T18:50:38.639348059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 18:50:38.640540 containerd[1560]: time="2026-01-23T18:50:38.640522360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:50:38.656852 containerd[1560]: time="2026-01-23T18:50:38.656812205Z" level=info msg="CreateContainer within sandbox \"323fae46398d5d19a889442be296fe7ed4de063fc5619de418024728b208db31\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 18:50:38.667204 containerd[1560]: time="2026-01-23T18:50:38.665941207Z" level=info msg="Container 669e8d1b64f0d86c08c9e74b510f85446aab1587b520a4cd724e91eb222e21b6: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:38.673452 containerd[1560]: time="2026-01-23T18:50:38.673391893Z" level=info msg="CreateContainer within sandbox \"323fae46398d5d19a889442be296fe7ed4de063fc5619de418024728b208db31\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"669e8d1b64f0d86c08c9e74b510f85446aab1587b520a4cd724e91eb222e21b6\"" Jan 23 18:50:38.674166 containerd[1560]: time="2026-01-23T18:50:38.674114300Z" level=info msg="StartContainer for \"669e8d1b64f0d86c08c9e74b510f85446aab1587b520a4cd724e91eb222e21b6\"" Jan 23 18:50:38.675502 containerd[1560]: time="2026-01-23T18:50:38.675476992Z" level=info msg="connecting to shim 669e8d1b64f0d86c08c9e74b510f85446aab1587b520a4cd724e91eb222e21b6" address="unix:///run/containerd/s/9a6ad4235dd8eca75ac8eb4de44666a1e2452d3cc8addb87622eee1ae24e2a5e" protocol=ttrpc version=3 Jan 23 18:50:38.702050 systemd[1]: Started cri-containerd-669e8d1b64f0d86c08c9e74b510f85446aab1587b520a4cd724e91eb222e21b6.scope - libcontainer container 669e8d1b64f0d86c08c9e74b510f85446aab1587b520a4cd724e91eb222e21b6. 
Jan 23 18:50:38.784989 containerd[1560]: time="2026-01-23T18:50:38.784916568Z" level=info msg="StartContainer for \"669e8d1b64f0d86c08c9e74b510f85446aab1587b520a4cd724e91eb222e21b6\" returns successfully" Jan 23 18:50:39.236255 kubelet[2719]: E0123 18:50:39.236185 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:39.251678 kubelet[2719]: I0123 18:50:39.250893 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f9bbc79bd-9cqh9" podStartSLOduration=1.270107715 podStartE2EDuration="3.250875304s" podCreationTimestamp="2026-01-23 18:50:36 +0000 UTC" firstStartedPulling="2026-01-23 18:50:36.659701811 +0000 UTC m=+19.655536015" lastFinishedPulling="2026-01-23 18:50:38.6404694 +0000 UTC m=+21.636303604" observedRunningTime="2026-01-23 18:50:39.249053288 +0000 UTC m=+22.244887492" watchObservedRunningTime="2026-01-23 18:50:39.250875304 +0000 UTC m=+22.246709508" Jan 23 18:50:40.118821 kubelet[2719]: E0123 18:50:40.118751 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:50:40.237271 kubelet[2719]: I0123 18:50:40.237241 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:50:40.237949 kubelet[2719]: E0123 18:50:40.237930 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:40.558103 containerd[1560]: time="2026-01-23T18:50:40.558050712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:40.559175 containerd[1560]: time="2026-01-23T18:50:40.559146420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 18:50:40.559624 containerd[1560]: time="2026-01-23T18:50:40.559581764Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:40.562647 containerd[1560]: time="2026-01-23T18:50:40.561696841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:40.562647 containerd[1560]: time="2026-01-23T18:50:40.562331226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 1.921688405s" Jan 23 18:50:40.562647 containerd[1560]: time="2026-01-23T18:50:40.562370766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:50:40.567281 containerd[1560]: time="2026-01-23T18:50:40.567251405Z" 
level=info msg="CreateContainer within sandbox \"c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:50:40.580821 containerd[1560]: time="2026-01-23T18:50:40.576945561Z" level=info msg="Container 8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:40.581612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356448937.mount: Deactivated successfully. Jan 23 18:50:40.590262 containerd[1560]: time="2026-01-23T18:50:40.590224826Z" level=info msg="CreateContainer within sandbox \"c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce\"" Jan 23 18:50:40.591116 containerd[1560]: time="2026-01-23T18:50:40.590985952Z" level=info msg="StartContainer for \"8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce\"" Jan 23 18:50:40.593198 containerd[1560]: time="2026-01-23T18:50:40.593173289Z" level=info msg="connecting to shim 8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce" address="unix:///run/containerd/s/a9a1b9d3108ed0693e2f1b797e9c114c466cdc0c9070789dacfd5b3416400193" protocol=ttrpc version=3 Jan 23 18:50:40.617972 systemd[1]: Started cri-containerd-8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce.scope - libcontainer container 8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce. Jan 23 18:50:40.716490 containerd[1560]: time="2026-01-23T18:50:40.716254132Z" level=info msg="StartContainer for \"8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce\" returns successfully" Jan 23 18:50:41.220418 containerd[1560]: time="2026-01-23T18:50:41.220382064Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:50:41.224741 systemd[1]: cri-containerd-8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce.scope: Deactivated successfully. Jan 23 18:50:41.225206 systemd[1]: cri-containerd-8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce.scope: Consumed 521ms CPU time, 197.7M memory peak, 171.3M written to disk. Jan 23 18:50:41.226665 containerd[1560]: time="2026-01-23T18:50:41.226619680Z" level=info msg="received container exit event container_id:\"8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce\" id:\"8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce\" pid:3475 exited_at:{seconds:1769194241 nanos:225987856}" Jan 23 18:50:41.246201 kubelet[2719]: E0123 18:50:41.246177 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:41.260170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b0874cda7dcf2649f50ff71b05de42c0bfbca58e97c344bbaa19cd0f1fc23ce-rootfs.mount: Deactivated successfully. 
Jan 23 18:50:41.320270 kubelet[2719]: I0123 18:50:41.320246 2719 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:50:41.356019 systemd[1]: Created slice kubepods-burstable-pod4d671af6_7ef5_45f9_9202_d33ec17c60fa.slice - libcontainer container kubepods-burstable-pod4d671af6_7ef5_45f9_9202_d33ec17c60fa.slice. Jan 23 18:50:41.373795 systemd[1]: Created slice kubepods-besteffort-pod12d6ca3a_1fd4_422a_92d6_048bdc9d3706.slice - libcontainer container kubepods-besteffort-pod12d6ca3a_1fd4_422a_92d6_048bdc9d3706.slice. Jan 23 18:50:41.387288 systemd[1]: Created slice kubepods-burstable-podba3daaa2_8f67_4318_8e25_c0cfe3f5ea4d.slice - libcontainer container kubepods-burstable-podba3daaa2_8f67_4318_8e25_c0cfe3f5ea4d.slice. Jan 23 18:50:41.396461 systemd[1]: Created slice kubepods-besteffort-podd93cd3f7_0e65_42d4_b5ec_feb6f561b2d9.slice - libcontainer container kubepods-besteffort-podd93cd3f7_0e65_42d4_b5ec_feb6f561b2d9.slice. Jan 23 18:50:41.408151 systemd[1]: Created slice kubepods-besteffort-podeb0436cc_c3aa_42bb_818e_f3271ae0d24d.slice - libcontainer container kubepods-besteffort-podeb0436cc_c3aa_42bb_818e_f3271ae0d24d.slice. Jan 23 18:50:41.415935 systemd[1]: Created slice kubepods-besteffort-poda2a6e99e_bac7_4999_a563_b6f5faa05139.slice - libcontainer container kubepods-besteffort-poda2a6e99e_bac7_4999_a563_b6f5faa05139.slice. Jan 23 18:50:41.428622 systemd[1]: Created slice kubepods-besteffort-pod39b71af8_c04c_43e8_b2a3_f5d78af0b0fc.slice - libcontainer container kubepods-besteffort-pod39b71af8_c04c_43e8_b2a3_f5d78af0b0fc.slice. Jan 23 18:50:41.512627 kubelet[2719]: I0123 18:50:41.512595 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a2a6e99e-bac7-4999-a563-b6f5faa05139-calico-apiserver-certs\") pod \"calico-apiserver-5945bdff85-tm7sj\" (UID: \"a2a6e99e-bac7-4999-a563-b6f5faa05139\") " pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" Jan 23 18:50:41.512757 kubelet[2719]: I0123 18:50:41.512636 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d671af6-7ef5-45f9-9202-d33ec17c60fa-config-volume\") pod \"coredns-674b8bbfcf-862kv\" (UID: \"4d671af6-7ef5-45f9-9202-d33ec17c60fa\") " pod="kube-system/coredns-674b8bbfcf-862kv" Jan 23 18:50:41.512757 kubelet[2719]: I0123 18:50:41.512656 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrlf\" (UniqueName: \"kubernetes.io/projected/4d671af6-7ef5-45f9-9202-d33ec17c60fa-kube-api-access-lxrlf\") pod \"coredns-674b8bbfcf-862kv\" (UID: \"4d671af6-7ef5-45f9-9202-d33ec17c60fa\") " pod="kube-system/coredns-674b8bbfcf-862kv" Jan 23 18:50:41.512757 kubelet[2719]: I0123 18:50:41.512674 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-ca-bundle\") pod \"whisker-6b559c44bb-vxwrq\" (UID: \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\") " pod="calico-system/whisker-6b559c44bb-vxwrq" Jan 23 18:50:41.512757 kubelet[2719]: I0123 18:50:41.512689 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgnk\" (UniqueName: \"kubernetes.io/projected/12d6ca3a-1fd4-422a-92d6-048bdc9d3706-kube-api-access-8lgnk\") pod 
\"calico-kube-controllers-677f685ddf-wzbds\" (UID: \"12d6ca3a-1fd4-422a-92d6-048bdc9d3706\") " pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" Jan 23 18:50:41.512757 kubelet[2719]: I0123 18:50:41.512708 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b71af8-c04c-43e8-b2a3-f5d78af0b0fc-goldmane-ca-bundle\") pod \"goldmane-666569f655-tn6z6\" (UID: \"39b71af8-c04c-43e8-b2a3-f5d78af0b0fc\") " pod="calico-system/goldmane-666569f655-tn6z6" Jan 23 18:50:41.512913 kubelet[2719]: I0123 18:50:41.512723 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/39b71af8-c04c-43e8-b2a3-f5d78af0b0fc-goldmane-key-pair\") pod \"goldmane-666569f655-tn6z6\" (UID: \"39b71af8-c04c-43e8-b2a3-f5d78af0b0fc\") " pod="calico-system/goldmane-666569f655-tn6z6" Jan 23 18:50:41.512913 kubelet[2719]: I0123 18:50:41.512739 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8drl\" (UniqueName: \"kubernetes.io/projected/39b71af8-c04c-43e8-b2a3-f5d78af0b0fc-kube-api-access-f8drl\") pod \"goldmane-666569f655-tn6z6\" (UID: \"39b71af8-c04c-43e8-b2a3-f5d78af0b0fc\") " pod="calico-system/goldmane-666569f655-tn6z6" Jan 23 18:50:41.512913 kubelet[2719]: I0123 18:50:41.512760 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9-calico-apiserver-certs\") pod \"calico-apiserver-5945bdff85-k9mzc\" (UID: \"d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9\") " pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" Jan 23 18:50:41.512913 kubelet[2719]: I0123 18:50:41.512791 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmsk8\" (UniqueName: \"kubernetes.io/projected/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-kube-api-access-mmsk8\") pod \"whisker-6b559c44bb-vxwrq\" (UID: \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\") " pod="calico-system/whisker-6b559c44bb-vxwrq" Jan 23 18:50:41.512913 kubelet[2719]: I0123 18:50:41.512808 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtp9g\" (UniqueName: \"kubernetes.io/projected/d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9-kube-api-access-rtp9g\") pod \"calico-apiserver-5945bdff85-k9mzc\" (UID: \"d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9\") " pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" Jan 23 18:50:41.513031 kubelet[2719]: I0123 18:50:41.512826 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12d6ca3a-1fd4-422a-92d6-048bdc9d3706-tigera-ca-bundle\") pod \"calico-kube-controllers-677f685ddf-wzbds\" (UID: \"12d6ca3a-1fd4-422a-92d6-048bdc9d3706\") " pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" Jan 23 18:50:41.513031 kubelet[2719]: I0123 18:50:41.512841 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b71af8-c04c-43e8-b2a3-f5d78af0b0fc-config\") pod \"goldmane-666569f655-tn6z6\" (UID: \"39b71af8-c04c-43e8-b2a3-f5d78af0b0fc\") " pod="calico-system/goldmane-666569f655-tn6z6" Jan 23 18:50:41.513031 kubelet[2719]: I0123 
18:50:41.512860 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d-config-volume\") pod \"coredns-674b8bbfcf-5sx8f\" (UID: \"ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d\") " pod="kube-system/coredns-674b8bbfcf-5sx8f" Jan 23 18:50:41.513031 kubelet[2719]: I0123 18:50:41.512875 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czn8c\" (UniqueName: \"kubernetes.io/projected/a2a6e99e-bac7-4999-a563-b6f5faa05139-kube-api-access-czn8c\") pod \"calico-apiserver-5945bdff85-tm7sj\" (UID: \"a2a6e99e-bac7-4999-a563-b6f5faa05139\") " pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" Jan 23 18:50:41.513031 kubelet[2719]: I0123 18:50:41.512892 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxrkl\" (UniqueName: \"kubernetes.io/projected/ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d-kube-api-access-sxrkl\") pod \"coredns-674b8bbfcf-5sx8f\" (UID: \"ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d\") " pod="kube-system/coredns-674b8bbfcf-5sx8f" Jan 23 18:50:41.513145 kubelet[2719]: I0123 18:50:41.512909 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-backend-key-pair\") pod \"whisker-6b559c44bb-vxwrq\" (UID: \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\") " pod="calico-system/whisker-6b559c44bb-vxwrq" Jan 23 18:50:41.684092 containerd[1560]: time="2026-01-23T18:50:41.684057403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f685ddf-wzbds,Uid:12d6ca3a-1fd4-422a-92d6-048bdc9d3706,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:41.692804 kubelet[2719]: E0123 18:50:41.692080 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:41.692907 containerd[1560]: time="2026-01-23T18:50:41.692528086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5sx8f,Uid:ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:41.702145 containerd[1560]: time="2026-01-23T18:50:41.702060267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-k9mzc,Uid:d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:41.713128 containerd[1560]: time="2026-01-23T18:50:41.713027368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b559c44bb-vxwrq,Uid:eb0436cc-c3aa-42bb-818e-f3271ae0d24d,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:41.725175 containerd[1560]: time="2026-01-23T18:50:41.725133048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-tm7sj,Uid:a2a6e99e-bac7-4999-a563-b6f5faa05139,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:41.734832 containerd[1560]: time="2026-01-23T18:50:41.734617679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tn6z6,Uid:39b71af8-c04c-43e8-b2a3-f5d78af0b0fc,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:41.812537 containerd[1560]: time="2026-01-23T18:50:41.812348067Z" level=error msg="Failed to destroy network for sandbox 
\"cb161d3190721c77fe46bd4173dd87e14bd1baad8ca3bcc39e363871d89ba423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.813714 containerd[1560]: time="2026-01-23T18:50:41.813609726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f685ddf-wzbds,Uid:12d6ca3a-1fd4-422a-92d6-048bdc9d3706,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb161d3190721c77fe46bd4173dd87e14bd1baad8ca3bcc39e363871d89ba423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.814157 kubelet[2719]: E0123 18:50:41.814110 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb161d3190721c77fe46bd4173dd87e14bd1baad8ca3bcc39e363871d89ba423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.814238 kubelet[2719]: E0123 18:50:41.814173 2719 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb161d3190721c77fe46bd4173dd87e14bd1baad8ca3bcc39e363871d89ba423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" Jan 23 18:50:41.814238 kubelet[2719]: E0123 18:50:41.814192 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb161d3190721c77fe46bd4173dd87e14bd1baad8ca3bcc39e363871d89ba423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" Jan 23 18:50:41.816577 kubelet[2719]: E0123 18:50:41.814541 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677f685ddf-wzbds_calico-system(12d6ca3a-1fd4-422a-92d6-048bdc9d3706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677f685ddf-wzbds_calico-system(12d6ca3a-1fd4-422a-92d6-048bdc9d3706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb161d3190721c77fe46bd4173dd87e14bd1baad8ca3bcc39e363871d89ba423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:50:41.853330 containerd[1560]: time="2026-01-23T18:50:41.853230091Z" level=error msg="Failed to destroy network for sandbox \"d5241193c4247a8317d444f132eafb4f4cfcba551f0f168abe26c0f67ae84b8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 23 18:50:41.855448 containerd[1560]: time="2026-01-23T18:50:41.855371047Z" level=error msg="Failed to destroy network for sandbox \"4c6f547106c6e5a07b748ce81a3c47d27aae19989fadd771342ac6dc4e1686bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.857188 containerd[1560]: time="2026-01-23T18:50:41.857161650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-k9mzc,Uid:d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5241193c4247a8317d444f132eafb4f4cfcba551f0f168abe26c0f67ae84b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.858549 containerd[1560]: time="2026-01-23T18:50:41.858174238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5sx8f,Uid:ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c6f547106c6e5a07b748ce81a3c47d27aae19989fadd771342ac6dc4e1686bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.858549 containerd[1560]: time="2026-01-23T18:50:41.858289069Z" level=error msg="Failed to destroy network for sandbox \"538522887088fe411c3e744be8403ab0cacdb88e1afa89c359aef8286f7c127a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.858928 kubelet[2719]: E0123 18:50:41.858888 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5241193c4247a8317d444f132eafb4f4cfcba551f0f168abe26c0f67ae84b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.859029 kubelet[2719]: E0123 18:50:41.858967 2719 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5241193c4247a8317d444f132eafb4f4cfcba551f0f168abe26c0f67ae84b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" Jan 23 18:50:41.859029 kubelet[2719]: E0123 18:50:41.858989 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5241193c4247a8317d444f132eafb4f4cfcba551f0f168abe26c0f67ae84b8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" Jan 23 18:50:41.859118 kubelet[2719]: E0123 18:50:41.859031 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"4c6f547106c6e5a07b748ce81a3c47d27aae19989fadd771342ac6dc4e1686bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.859118 kubelet[2719]: E0123 18:50:41.859048 2719 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c6f547106c6e5a07b748ce81a3c47d27aae19989fadd771342ac6dc4e1686bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5sx8f" Jan 23 18:50:41.859118 kubelet[2719]: E0123 18:50:41.859066 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c6f547106c6e5a07b748ce81a3c47d27aae19989fadd771342ac6dc4e1686bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5sx8f" Jan 23 18:50:41.859893 kubelet[2719]: E0123 18:50:41.859856 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5945bdff85-k9mzc_calico-apiserver(d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5945bdff85-k9mzc_calico-apiserver(d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5241193c4247a8317d444f132eafb4f4cfcba551f0f168abe26c0f67ae84b8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:50:41.859964 kubelet[2719]: E0123 18:50:41.859925 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5sx8f_kube-system(ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5sx8f_kube-system(ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c6f547106c6e5a07b748ce81a3c47d27aae19989fadd771342ac6dc4e1686bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5sx8f" podUID="ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d" Jan 23 18:50:41.860478 containerd[1560]: time="2026-01-23T18:50:41.860453245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-tm7sj,Uid:a2a6e99e-bac7-4999-a563-b6f5faa05139,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"538522887088fe411c3e744be8403ab0cacdb88e1afa89c359aef8286f7c127a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.860683 kubelet[2719]: E0123 
18:50:41.860597 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538522887088fe411c3e744be8403ab0cacdb88e1afa89c359aef8286f7c127a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.860683 kubelet[2719]: E0123 18:50:41.860664 2719 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538522887088fe411c3e744be8403ab0cacdb88e1afa89c359aef8286f7c127a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" Jan 23 18:50:41.861088 kubelet[2719]: E0123 18:50:41.860679 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"538522887088fe411c3e744be8403ab0cacdb88e1afa89c359aef8286f7c127a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" Jan 23 18:50:41.861133 kubelet[2719]: E0123 18:50:41.861108 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5945bdff85-tm7sj_calico-apiserver(a2a6e99e-bac7-4999-a563-b6f5faa05139)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5945bdff85-tm7sj_calico-apiserver(a2a6e99e-bac7-4999-a563-b6f5faa05139)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"538522887088fe411c3e744be8403ab0cacdb88e1afa89c359aef8286f7c127a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:50:41.884263 containerd[1560]: time="2026-01-23T18:50:41.884224112Z" level=error msg="Failed to destroy network for sandbox \"68af19f57d2a64d2e15774b5f69dab1351fb48b3a7dee47c2e5f6c746a33a905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.886376 containerd[1560]: time="2026-01-23T18:50:41.886302797Z" level=error msg="Failed to destroy network for sandbox \"e069fe705e772910fe1148ac44b2c9b72895ad308204a030517f76923186bf20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.887182 containerd[1560]: time="2026-01-23T18:50:41.887157643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tn6z6,Uid:39b71af8-c04c-43e8-b2a3-f5d78af0b0fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68af19f57d2a64d2e15774b5f69dab1351fb48b3a7dee47c2e5f6c746a33a905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.887733 kubelet[2719]: E0123 18:50:41.887698 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68af19f57d2a64d2e15774b5f69dab1351fb48b3a7dee47c2e5f6c746a33a905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.888135 kubelet[2719]: E0123 18:50:41.887755 2719 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68af19f57d2a64d2e15774b5f69dab1351fb48b3a7dee47c2e5f6c746a33a905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tn6z6" Jan 23 18:50:41.888135 kubelet[2719]: E0123 18:50:41.887919 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68af19f57d2a64d2e15774b5f69dab1351fb48b3a7dee47c2e5f6c746a33a905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tn6z6" Jan 23 18:50:41.888135 kubelet[2719]: E0123 18:50:41.887972 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-tn6z6_calico-system(39b71af8-c04c-43e8-b2a3-f5d78af0b0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-tn6z6_calico-system(39b71af8-c04c-43e8-b2a3-f5d78af0b0fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68af19f57d2a64d2e15774b5f69dab1351fb48b3a7dee47c2e5f6c746a33a905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:50:41.888486 containerd[1560]: time="2026-01-23T18:50:41.888447503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b559c44bb-vxwrq,Uid:eb0436cc-c3aa-42bb-818e-f3271ae0d24d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e069fe705e772910fe1148ac44b2c9b72895ad308204a030517f76923186bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.888889 kubelet[2719]: E0123 18:50:41.888691 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e069fe705e772910fe1148ac44b2c9b72895ad308204a030517f76923186bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:41.888889 kubelet[2719]: E0123 18:50:41.888740 2719 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e069fe705e772910fe1148ac44b2c9b72895ad308204a030517f76923186bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b559c44bb-vxwrq" Jan 23 18:50:41.888889 kubelet[2719]: E0123 18:50:41.888756 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e069fe705e772910fe1148ac44b2c9b72895ad308204a030517f76923186bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b559c44bb-vxwrq" Jan 23 18:50:41.889048 kubelet[2719]: E0123 18:50:41.888815 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b559c44bb-vxwrq_calico-system(eb0436cc-c3aa-42bb-818e-f3271ae0d24d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b559c44bb-vxwrq_calico-system(eb0436cc-c3aa-42bb-818e-f3271ae0d24d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e069fe705e772910fe1148ac44b2c9b72895ad308204a030517f76923186bf20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b559c44bb-vxwrq" podUID="eb0436cc-c3aa-42bb-818e-f3271ae0d24d" Jan 23 18:50:41.964108 kubelet[2719]: E0123 18:50:41.964061 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:41.964879 containerd[1560]: time="2026-01-23T18:50:41.964828311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-862kv,Uid:4d671af6-7ef5-45f9-9202-d33ec17c60fa,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:42.014204 containerd[1560]: time="2026-01-23T18:50:42.014138552Z" level=error msg="Failed to destroy network for sandbox \"3d4c5e8ca7edf8ba39e3a1051e2fdc9ee99a59e5d67c2455c2e313c0ea1ebee3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:42.015369 containerd[1560]: time="2026-01-23T18:50:42.015337451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-862kv,Uid:4d671af6-7ef5-45f9-9202-d33ec17c60fa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4c5e8ca7edf8ba39e3a1051e2fdc9ee99a59e5d67c2455c2e313c0ea1ebee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:42.015619 kubelet[2719]: E0123 18:50:42.015590 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4c5e8ca7edf8ba39e3a1051e2fdc9ee99a59e5d67c2455c2e313c0ea1ebee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:42.015682 kubelet[2719]: E0123 18:50:42.015641 2719 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4c5e8ca7edf8ba39e3a1051e2fdc9ee99a59e5d67c2455c2e313c0ea1ebee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-862kv" Jan 23 18:50:42.015709 kubelet[2719]: E0123 18:50:42.015688 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d4c5e8ca7edf8ba39e3a1051e2fdc9ee99a59e5d67c2455c2e313c0ea1ebee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-862kv" Jan 23 18:50:42.016097 kubelet[2719]: E0123 18:50:42.015749 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-862kv_kube-system(4d671af6-7ef5-45f9-9202-d33ec17c60fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-862kv_kube-system(4d671af6-7ef5-45f9-9202-d33ec17c60fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d4c5e8ca7edf8ba39e3a1051e2fdc9ee99a59e5d67c2455c2e313c0ea1ebee3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-862kv" podUID="4d671af6-7ef5-45f9-9202-d33ec17c60fa" Jan 23 18:50:42.125530 systemd[1]: Created slice kubepods-besteffort-pod9e8a8862_2354_40b6_9db2_d22bd07a4dc3.slice - libcontainer container kubepods-besteffort-pod9e8a8862_2354_40b6_9db2_d22bd07a4dc3.slice. 
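The run of failures above is one fault fanned out across every pending pod: the Calico CNI plugin cannot stat /var/lib/calico/nodename, so each sandbox ADD (and the cleanup DEL) is rejected, and the kubelet requeues the pods with CreatePodSandboxError. The nodename file is written by the calico/node container when it starts; until then the plugin has no node identity to attach endpoints to. A minimal Go sketch of the gate implied by the error string (not Calico's actual source, just the shape of the check):

```go
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // path taken verbatim from the log

// nodename mimics the check the CNI plugin performs before any ADD/DEL:
// read the node identity that calico/node writes on startup, or fail with
// the exact hint seen repeatedly in the entries above.
func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

With that in mind, the rest of this section is largely the kubelet retrying until calico-node comes up.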
Jan 23 18:50:42.130508 containerd[1560]: time="2026-01-23T18:50:42.130372437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b2hd,Uid:9e8a8862-2354-40b6-9db2-d22bd07a4dc3,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:42.198239 containerd[1560]: time="2026-01-23T18:50:42.198197831Z" level=error msg="Failed to destroy network for sandbox \"009ef5c2bf3827ff742c5fe09e13716a1d767f38ca1e76f17b4d7911910514d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:42.199755 containerd[1560]: time="2026-01-23T18:50:42.199674162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b2hd,Uid:9e8a8862-2354-40b6-9db2-d22bd07a4dc3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"009ef5c2bf3827ff742c5fe09e13716a1d767f38ca1e76f17b4d7911910514d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:42.200076 kubelet[2719]: E0123 18:50:42.200031 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"009ef5c2bf3827ff742c5fe09e13716a1d767f38ca1e76f17b4d7911910514d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:42.200076 kubelet[2719]: E0123 18:50:42.200080 2719 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"009ef5c2bf3827ff742c5fe09e13716a1d767f38ca1e76f17b4d7911910514d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b2hd" Jan 23 18:50:42.200309 kubelet[2719]: E0123 18:50:42.200117 2719 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"009ef5c2bf3827ff742c5fe09e13716a1d767f38ca1e76f17b4d7911910514d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b2hd" Jan 23 18:50:42.200309 kubelet[2719]: E0123 18:50:42.200162 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"009ef5c2bf3827ff742c5fe09e13716a1d767f38ca1e76f17b4d7911910514d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:50:42.253954 kubelet[2719]: E0123 18:50:42.253916 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:42.256079 containerd[1560]: time="2026-01-23T18:50:42.255963656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:50:46.049551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278315376.mount: Deactivated successfully. Jan 23 18:50:46.084748 containerd[1560]: time="2026-01-23T18:50:46.084688102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:46.085862 containerd[1560]: time="2026-01-23T18:50:46.085623568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 18:50:46.086568 containerd[1560]: time="2026-01-23T18:50:46.086527443Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:46.088389 containerd[1560]: time="2026-01-23T18:50:46.088359593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:46.089113 containerd[1560]: time="2026-01-23T18:50:46.089087816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.83306424s" Jan 23 18:50:46.089211 containerd[1560]: time="2026-01-23T18:50:46.089193768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:50:46.108412 containerd[1560]: time="2026-01-23T18:50:46.108353623Z" level=info msg="CreateContainer within sandbox \"c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:50:46.119312 containerd[1560]: time="2026-01-23T18:50:46.117894486Z" level=info msg="Container 29b24d3543d111aa13ae31e32a8fafe33a254b53ad50bbee42963b7f5d88a758: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:46.128820 containerd[1560]: time="2026-01-23T18:50:46.128720516Z" level=info msg="CreateContainer within sandbox \"c30ca9f64b69f34818750f89eb134b30fe53fb969b6fa70e007bc168b9c63769\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"29b24d3543d111aa13ae31e32a8fafe33a254b53ad50bbee42963b7f5d88a758\"" Jan 23 18:50:46.129857 containerd[1560]: time="2026-01-23T18:50:46.129695462Z" level=info msg="StartContainer for \"29b24d3543d111aa13ae31e32a8fafe33a254b53ad50bbee42963b7f5d88a758\"" Jan 23 18:50:46.132816 containerd[1560]: time="2026-01-23T18:50:46.132780978Z" level=info msg="connecting to shim 29b24d3543d111aa13ae31e32a8fafe33a254b53ad50bbee42963b7f5d88a758" address="unix:///run/containerd/s/a9a1b9d3108ed0693e2f1b797e9c114c466cdc0c9070789dacfd5b3416400193" protocol=ttrpc version=3 Jan 23 18:50:46.181998 systemd[1]: Started cri-containerd-29b24d3543d111aa13ae31e32a8fafe33a254b53ad50bbee42963b7f5d88a758.scope - libcontainer container 29b24d3543d111aa13ae31e32a8fafe33a254b53ad50bbee42963b7f5d88a758. 
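Recovery starts here: containerd pulls ghcr.io/flatcar/calico/node:v3.30.4 (about 157 MB in 3.83s), creates the calico-node container in its existing sandbox, and starts it through a shim reached over a ttrpc unix socket. The pull corresponds roughly to the following against containerd's Go client; this is a sketch assuming the stock github.com/containerd/containerd API rather than the kubelet's actual CRI code path (CRI-managed images live in containerd's k8s.io namespace):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// connect to the same containerd instance the kubelet uses
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI images are kept in the "k8s.io" namespace
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err) // a missing tag surfaces as "not found", as with the whisker images later
	}
	fmt.Println("pulled", img.Name(), img.Target().Digest)
}
```

The ImageCreate events above are that content (tag, config blob, repo digest) landing in the image store.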
Jan 23 18:50:46.281132 containerd[1560]: time="2026-01-23T18:50:46.281051927Z" level=info msg="StartContainer for \"29b24d3543d111aa13ae31e32a8fafe33a254b53ad50bbee42963b7f5d88a758\" returns successfully" Jan 23 18:50:46.369038 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:50:46.369207 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 18:50:46.549048 kubelet[2719]: I0123 18:50:46.548972 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmsk8\" (UniqueName: \"kubernetes.io/projected/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-kube-api-access-mmsk8\") pod \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\" (UID: \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\") " Jan 23 18:50:46.549048 kubelet[2719]: I0123 18:50:46.549046 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-backend-key-pair\") pod \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\" (UID: \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\") " Jan 23 18:50:46.549653 kubelet[2719]: I0123 18:50:46.549085 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-ca-bundle\") pod \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\" (UID: \"eb0436cc-c3aa-42bb-818e-f3271ae0d24d\") " Jan 23 18:50:46.549653 kubelet[2719]: I0123 18:50:46.549638 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eb0436cc-c3aa-42bb-818e-f3271ae0d24d" (UID: "eb0436cc-c3aa-42bb-818e-f3271ae0d24d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:50:46.565445 kubelet[2719]: I0123 18:50:46.565383 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eb0436cc-c3aa-42bb-818e-f3271ae0d24d" (UID: "eb0436cc-c3aa-42bb-818e-f3271ae0d24d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:50:46.565823 kubelet[2719]: I0123 18:50:46.565527 2719 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-kube-api-access-mmsk8" (OuterVolumeSpecName: "kube-api-access-mmsk8") pod "eb0436cc-c3aa-42bb-818e-f3271ae0d24d" (UID: "eb0436cc-c3aa-42bb-818e-f3271ae0d24d"). InnerVolumeSpecName "kube-api-access-mmsk8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:50:46.649859 kubelet[2719]: I0123 18:50:46.649748 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mmsk8\" (UniqueName: \"kubernetes.io/projected/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-kube-api-access-mmsk8\") on node \"172-239-197-220\" DevicePath \"\"" Jan 23 18:50:46.649859 kubelet[2719]: I0123 18:50:46.649865 2719 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-backend-key-pair\") on node \"172-239-197-220\" DevicePath \"\"" Jan 23 18:50:46.650133 kubelet[2719]: I0123 18:50:46.649882 2719 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb0436cc-c3aa-42bb-818e-f3271ae0d24d-whisker-ca-bundle\") on node \"172-239-197-220\" DevicePath \"\"" Jan 23 18:50:47.048145 systemd[1]: var-lib-kubelet-pods-eb0436cc\x2dc3aa\x2d42bb\x2d818e\x2df3271ae0d24d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmsk8.mount: Deactivated successfully. Jan 23 18:50:47.048889 systemd[1]: var-lib-kubelet-pods-eb0436cc\x2dc3aa\x2d42bb\x2d818e\x2df3271ae0d24d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:50:47.129264 systemd[1]: Removed slice kubepods-besteffort-podeb0436cc_c3aa_42bb_818e_f3271ae0d24d.slice - libcontainer container kubepods-besteffort-podeb0436cc_c3aa_42bb_818e_f3271ae0d24d.slice. Jan 23 18:50:47.288857 kubelet[2719]: E0123 18:50:47.288807 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:47.343320 kubelet[2719]: I0123 18:50:47.343026 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7qvhr" podStartSLOduration=1.8940747770000002 podStartE2EDuration="11.341370439s" podCreationTimestamp="2026-01-23 18:50:36 +0000 UTC" firstStartedPulling="2026-01-23 18:50:36.64266766 +0000 UTC m=+19.638501864" lastFinishedPulling="2026-01-23 18:50:46.089963322 +0000 UTC m=+29.085797526" observedRunningTime="2026-01-23 18:50:47.321982688 +0000 UTC m=+30.317816892" watchObservedRunningTime="2026-01-23 18:50:47.341370439 +0000 UTC m=+30.337204643" Jan 23 18:50:47.400641 systemd[1]: Created slice kubepods-besteffort-podb303f953_7602_41b2_af3d_dcb8c6c81cfb.slice - libcontainer container kubepods-besteffort-podb303f953_7602_41b2_af3d_dcb8c6c81cfb.slice. 
Jan 23 18:50:47.455022 kubelet[2719]: I0123 18:50:47.454838 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4cbg\" (UniqueName: \"kubernetes.io/projected/b303f953-7602-41b2-af3d-dcb8c6c81cfb-kube-api-access-k4cbg\") pod \"whisker-679f64db96-vrbwz\" (UID: \"b303f953-7602-41b2-af3d-dcb8c6c81cfb\") " pod="calico-system/whisker-679f64db96-vrbwz" Jan 23 18:50:47.456964 kubelet[2719]: I0123 18:50:47.456812 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b303f953-7602-41b2-af3d-dcb8c6c81cfb-whisker-backend-key-pair\") pod \"whisker-679f64db96-vrbwz\" (UID: \"b303f953-7602-41b2-af3d-dcb8c6c81cfb\") " pod="calico-system/whisker-679f64db96-vrbwz" Jan 23 18:50:47.456964 kubelet[2719]: I0123 18:50:47.456902 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b303f953-7602-41b2-af3d-dcb8c6c81cfb-whisker-ca-bundle\") pod \"whisker-679f64db96-vrbwz\" (UID: \"b303f953-7602-41b2-af3d-dcb8c6c81cfb\") " pod="calico-system/whisker-679f64db96-vrbwz" Jan 23 18:50:47.705997 containerd[1560]: time="2026-01-23T18:50:47.705496966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679f64db96-vrbwz,Uid:b303f953-7602-41b2-af3d-dcb8c6c81cfb,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:47.939518 kubelet[2719]: I0123 18:50:47.938807 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:50:47.941526 kubelet[2719]: E0123 18:50:47.940692 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:47.960940 systemd-networkd[1442]: calidc55ad23fe6: Link UP Jan 23 18:50:47.961225 systemd-networkd[1442]: calidc55ad23fe6: Gained carrier Jan 23 18:50:47.991128 containerd[1560]: 2026-01-23 18:50:47.757 [INFO][3839] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:50:47.991128 containerd[1560]: 2026-01-23 18:50:47.801 [INFO][3839] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0 whisker-679f64db96- calico-system b303f953-7602-41b2-af3d-dcb8c6c81cfb 938 0 2026-01-23 18:50:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:679f64db96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-197-220 whisker-679f64db96-vrbwz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidc55ad23fe6 [] [] }} ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-" Jan 23 18:50:47.991128 containerd[1560]: 2026-01-23 18:50:47.801 [INFO][3839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" Jan 23 18:50:47.991128 containerd[1560]: 2026-01-23 18:50:47.855 [INFO][3903] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" HandleID="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Workload="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.856 [INFO][3903] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" HandleID="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Workload="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285900), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-197-220", "pod":"whisker-679f64db96-vrbwz", "timestamp":"2026-01-23 18:50:47.85593088 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.856 [INFO][3903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.856 [INFO][3903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.856 [INFO][3903] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.869 [INFO][3903] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" host="172-239-197-220" Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.877 [INFO][3903] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.889 [INFO][3903] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.896 [INFO][3903] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.903 [INFO][3903] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:47.991405 containerd[1560]: 2026-01-23 18:50:47.903 [INFO][3903] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" host="172-239-197-220" Jan 23 18:50:47.991694 containerd[1560]: 2026-01-23 18:50:47.912 [INFO][3903] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53 Jan 23 18:50:47.991694 containerd[1560]: 2026-01-23 18:50:47.920 [INFO][3903] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" host="172-239-197-220" Jan 23 18:50:47.991694 containerd[1560]: 2026-01-23 18:50:47.936 [INFO][3903] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.65/26] block=192.168.81.64/26 handle="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" host="172-239-197-220" Jan 23 18:50:47.991694 containerd[1560]: 2026-01-23 18:50:47.936 [INFO][3903] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.81.65/26] handle="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" host="172-239-197-220" Jan 23 18:50:47.991694 containerd[1560]: 2026-01-23 18:50:47.936 [INFO][3903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:50:47.991694 containerd[1560]: 2026-01-23 18:50:47.936 [INFO][3903] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.65/26] IPv6=[] ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" HandleID="k8s-pod-network.e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Workload="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" Jan 23 18:50:47.993588 containerd[1560]: 2026-01-23 18:50:47.942 [INFO][3839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0", GenerateName:"whisker-679f64db96-", Namespace:"calico-system", SelfLink:"", UID:"b303f953-7602-41b2-af3d-dcb8c6c81cfb", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679f64db96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"whisker-679f64db96-vrbwz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidc55ad23fe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:47.993588 containerd[1560]: 2026-01-23 18:50:47.942 [INFO][3839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.65/32] ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" Jan 23 18:50:47.995592 containerd[1560]: 2026-01-23 18:50:47.942 [INFO][3839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc55ad23fe6 ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" Jan 23 18:50:47.995592 containerd[1560]: 2026-01-23 18:50:47.959 [INFO][3839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" Jan 23 18:50:47.995649 containerd[1560]: 
2026-01-23 18:50:47.960 [INFO][3839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0", GenerateName:"whisker-679f64db96-", Namespace:"calico-system", SelfLink:"", UID:"b303f953-7602-41b2-af3d-dcb8c6c81cfb", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"679f64db96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53", Pod:"whisker-679f64db96-vrbwz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidc55ad23fe6", MAC:"3a:34:d1:34:ec:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:47.995705 containerd[1560]: 2026-01-23 18:50:47.987 [INFO][3839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" Namespace="calico-system" Pod="whisker-679f64db96-vrbwz" WorkloadEndpoint="172--239--197--220-k8s-whisker--679f64db96--vrbwz-eth0" Jan 23 18:50:48.051693 containerd[1560]: time="2026-01-23T18:50:48.051502874Z" level=info msg="connecting to shim e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53" address="unix:///run/containerd/s/42a498452c6e62d082d1eb5295f63d62def31d88b8c9f90557a834c4f275f150" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:48.111946 systemd[1]: Started cri-containerd-e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53.scope - libcontainer container e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53. 
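The trace above is Calico's block-affinity IPAM doing a first allocation on this node: the host 172-239-197-220 confirms its affinity for the block 192.168.81.64/26, takes the host-wide lock, and claims a /32 for the whisker pod, 192.168.81.65, the first address after the block's network address. A quick sanity check of the block geometry:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// the node holds an affinity for this block (see "Trying affinity for
	// 192.168.81.64/26" above); pod /32s are handed out from inside it
	_, block, err := net.ParseCIDR("192.168.81.64/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64
	// the first assignment in the log is 192.168.81.65/26, one past the
	// block's network address 192.168.81.64
}
```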
Jan 23 18:50:48.243524 containerd[1560]: time="2026-01-23T18:50:48.243400668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-679f64db96-vrbwz,Uid:b303f953-7602-41b2-af3d-dcb8c6c81cfb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e755865d146971ae0f2b29e854d4230f49ef036a4392ff8c26314101c2032b53\"" Jan 23 18:50:48.247410 containerd[1560]: time="2026-01-23T18:50:48.247345147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:50:48.291477 kubelet[2719]: E0123 18:50:48.290903 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:48.292160 kubelet[2719]: E0123 18:50:48.291999 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:48.396750 containerd[1560]: time="2026-01-23T18:50:48.396306489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:48.398391 containerd[1560]: time="2026-01-23T18:50:48.398322369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:50:48.398992 containerd[1560]: time="2026-01-23T18:50:48.398649471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:50:48.399529 kubelet[2719]: E0123 18:50:48.399307 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:50:48.399978 kubelet[2719]: E0123 18:50:48.399870 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:50:48.405518 kubelet[2719]: E0123 18:50:48.405041 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c2dcaa451a7d498d98335548b24bb005,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:48.410610 containerd[1560]: time="2026-01-23T18:50:48.410582719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:50:48.544787 containerd[1560]: time="2026-01-23T18:50:48.544408557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:48.546490 containerd[1560]: time="2026-01-23T18:50:48.546379357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:50:48.546490 containerd[1560]: time="2026-01-23T18:50:48.546432287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:50:48.546753 kubelet[2719]: E0123 18:50:48.546706 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:50:48.546841 kubelet[2719]: E0123 18:50:48.546804 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:50:48.547226 kubelet[2719]: E0123 18:50:48.547038 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:48.548525 kubelet[2719]: E0123 18:50:48.548474 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:50:48.773943 systemd-networkd[1442]: vxlan.calico: Link UP Jan 23 18:50:48.773953 systemd-networkd[1442]: vxlan.calico: Gained carrier Jan 23 18:50:49.137605 kubelet[2719]: I0123 18:50:49.137513 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="eb0436cc-c3aa-42bb-818e-f3271ae0d24d" path="/var/lib/kubelet/pods/eb0436cc-c3aa-42bb-818e-f3271ae0d24d/volumes" Jan 23 18:50:49.279005 systemd-networkd[1442]: calidc55ad23fe6: Gained IPv6LL Jan 23 18:50:49.295853 kubelet[2719]: E0123 18:50:49.295479 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:50:50.239884 systemd-networkd[1442]: vxlan.calico: Gained IPv6LL Jan 23 18:50:52.121150 containerd[1560]: time="2026-01-23T18:50:52.121086540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tn6z6,Uid:39b71af8-c04c-43e8-b2a3-f5d78af0b0fc,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:52.121878 containerd[1560]: time="2026-01-23T18:50:52.121709054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f685ddf-wzbds,Uid:12d6ca3a-1fd4-422a-92d6-048bdc9d3706,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:52.260871 systemd-networkd[1442]: cali445c26e9f0e: Link UP Jan 23 18:50:52.268549 systemd-networkd[1442]: cali445c26e9f0e: Gained carrier Jan 23 18:50:52.289184 containerd[1560]: 2026-01-23 18:50:52.173 [INFO][4114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0 calico-kube-controllers-677f685ddf- calico-system 12d6ca3a-1fd4-422a-92d6-048bdc9d3706 870 0 2026-01-23 18:50:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:677f685ddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-197-220 calico-kube-controllers-677f685ddf-wzbds eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali445c26e9f0e [] [] }} ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-" Jan 23 18:50:52.289184 containerd[1560]: 2026-01-23 18:50:52.173 [INFO][4114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" Jan 23 18:50:52.289184 containerd[1560]: 2026-01-23 18:50:52.215 [INFO][4137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" HandleID="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Workload="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.216 [INFO][4137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" HandleID="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Workload="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb890), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-197-220", "pod":"calico-kube-controllers-677f685ddf-wzbds", "timestamp":"2026-01-23 18:50:52.215977262 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.216 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.216 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.216 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.223 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" host="172-239-197-220" Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.228 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.233 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.235 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:52.289444 containerd[1560]: 2026-01-23 18:50:52.237 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:52.289690 containerd[1560]: 2026-01-23 18:50:52.237 [INFO][4137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" host="172-239-197-220" Jan 23 18:50:52.289690 containerd[1560]: 2026-01-23 18:50:52.238 [INFO][4137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252 Jan 23 18:50:52.289690 containerd[1560]: 2026-01-23 18:50:52.242 [INFO][4137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" host="172-239-197-220" Jan 23 18:50:52.289690 containerd[1560]: 2026-01-23 18:50:52.248 [INFO][4137] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.66/26] block=192.168.81.64/26 handle="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" host="172-239-197-220" Jan 23 18:50:52.289690 containerd[1560]: 
2026-01-23 18:50:52.248 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.66/26] handle="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" host="172-239-197-220" Jan 23 18:50:52.289690 containerd[1560]: 2026-01-23 18:50:52.248 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:50:52.289690 containerd[1560]: 2026-01-23 18:50:52.248 [INFO][4137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.66/26] IPv6=[] ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" HandleID="k8s-pod-network.3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Workload="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" Jan 23 18:50:52.290028 containerd[1560]: 2026-01-23 18:50:52.251 [INFO][4114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0", GenerateName:"calico-kube-controllers-677f685ddf-", Namespace:"calico-system", SelfLink:"", UID:"12d6ca3a-1fd4-422a-92d6-048bdc9d3706", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f685ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"calico-kube-controllers-677f685ddf-wzbds", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali445c26e9f0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:52.290107 containerd[1560]: 2026-01-23 18:50:52.252 [INFO][4114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.66/32] ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" Jan 23 18:50:52.290107 containerd[1560]: 2026-01-23 18:50:52.252 [INFO][4114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali445c26e9f0e ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" Jan 23 18:50:52.290107 containerd[1560]: 2026-01-23 18:50:52.269 [INFO][4114] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" Jan 23 18:50:52.290210 containerd[1560]: 2026-01-23 18:50:52.272 [INFO][4114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0", GenerateName:"calico-kube-controllers-677f685ddf-", Namespace:"calico-system", SelfLink:"", UID:"12d6ca3a-1fd4-422a-92d6-048bdc9d3706", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677f685ddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252", Pod:"calico-kube-controllers-677f685ddf-wzbds", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali445c26e9f0e", MAC:"ee:27:bc:3f:17:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:52.290262 containerd[1560]: 2026-01-23 18:50:52.283 [INFO][4114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" Namespace="calico-system" Pod="calico-kube-controllers-677f685ddf-wzbds" WorkloadEndpoint="172--239--197--220-k8s-calico--kube--controllers--677f685ddf--wzbds-eth0" Jan 23 18:50:52.321313 containerd[1560]: time="2026-01-23T18:50:52.321270453Z" level=info msg="connecting to shim 3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252" address="unix:///run/containerd/s/706bf111905edaa03e9f40dbc06ed515c17a8e160c88c31cd7f4d61ded80c649" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:52.370450 systemd[1]: Started cri-containerd-3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252.scope - libcontainer container 3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252. 
Jan 23 18:50:52.372398 systemd-networkd[1442]: cali5b249d16254: Link UP Jan 23 18:50:52.375333 systemd-networkd[1442]: cali5b249d16254: Gained carrier Jan 23 18:50:52.398563 containerd[1560]: 2026-01-23 18:50:52.177 [INFO][4113] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0 goldmane-666569f655- calico-system 39b71af8-c04c-43e8-b2a3-f5d78af0b0fc 874 0 2026-01-23 18:50:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-197-220 goldmane-666569f655-tn6z6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5b249d16254 [] [] }} ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-" Jan 23 18:50:52.398563 containerd[1560]: 2026-01-23 18:50:52.177 [INFO][4113] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" Jan 23 18:50:52.398563 containerd[1560]: 2026-01-23 18:50:52.221 [INFO][4142] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" HandleID="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Workload="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.222 [INFO][4142] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" HandleID="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Workload="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-197-220", "pod":"goldmane-666569f655-tn6z6", "timestamp":"2026-01-23 18:50:52.221984476 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.222 [INFO][4142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.248 [INFO][4142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.248 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.325 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" host="172-239-197-220" Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.332 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.337 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.341 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.343 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:52.398730 containerd[1560]: 2026-01-23 18:50:52.343 [INFO][4142] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" host="172-239-197-220" Jan 23 18:50:52.400157 containerd[1560]: 2026-01-23 18:50:52.345 [INFO][4142] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a Jan 23 18:50:52.400157 containerd[1560]: 2026-01-23 18:50:52.349 [INFO][4142] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" host="172-239-197-220" Jan 23 18:50:52.400157 containerd[1560]: 2026-01-23 18:50:52.356 [INFO][4142] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.67/26] block=192.168.81.64/26 handle="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" host="172-239-197-220" Jan 23 18:50:52.400157 containerd[1560]: 2026-01-23 18:50:52.356 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.67/26] handle="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" host="172-239-197-220" Jan 23 18:50:52.400157 containerd[1560]: 2026-01-23 18:50:52.356 [INFO][4142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:50:52.400157 containerd[1560]: 2026-01-23 18:50:52.356 [INFO][4142] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.67/26] IPv6=[] ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" HandleID="k8s-pod-network.73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Workload="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" Jan 23 18:50:52.400324 containerd[1560]: 2026-01-23 18:50:52.365 [INFO][4113] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"39b71af8-c04c-43e8-b2a3-f5d78af0b0fc", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"goldmane-666569f655-tn6z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b249d16254", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:52.400324 containerd[1560]: 2026-01-23 18:50:52.366 [INFO][4113] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.67/32] ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" Jan 23 18:50:52.400408 containerd[1560]: 2026-01-23 18:50:52.366 [INFO][4113] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b249d16254 ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" Jan 23 18:50:52.400408 containerd[1560]: 2026-01-23 18:50:52.370 [INFO][4113] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" Jan 23 18:50:52.400455 containerd[1560]: 2026-01-23 18:50:52.371 [INFO][4113] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" 
WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"39b71af8-c04c-43e8-b2a3-f5d78af0b0fc", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a", Pod:"goldmane-666569f655-tn6z6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b249d16254", MAC:"4a:dd:75:47:59:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:52.400527 containerd[1560]: 2026-01-23 18:50:52.392 [INFO][4113] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" Namespace="calico-system" Pod="goldmane-666569f655-tn6z6" WorkloadEndpoint="172--239--197--220-k8s-goldmane--666569f655--tn6z6-eth0" Jan 23 18:50:52.483227 containerd[1560]: time="2026-01-23T18:50:52.483157077Z" level=info msg="connecting to shim 73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a" address="unix:///run/containerd/s/15f4a82bdca6597f6362e3fb53d03580471b2c49ff2f746ef15c7a1005bb66cd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:52.492086 containerd[1560]: time="2026-01-23T18:50:52.491968442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677f685ddf-wzbds,Uid:12d6ca3a-1fd4-422a-92d6-048bdc9d3706,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fffc31c389a4a39053977b9cceb9179fdb285841921cd822e1d5fc67f386252\"" Jan 23 18:50:52.495961 containerd[1560]: time="2026-01-23T18:50:52.495940667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:50:52.513934 systemd[1]: Started cri-containerd-73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a.scope - libcontainer container 73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a. 
Jan 23 18:50:52.596890 containerd[1560]: time="2026-01-23T18:50:52.596833012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tn6z6,Uid:39b71af8-c04c-43e8-b2a3-f5d78af0b0fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"73683592d686605bd582dc0608c1ca3f053e6214a4cc2fbecd7af059b2923a1a\"" Jan 23 18:50:52.630216 containerd[1560]: time="2026-01-23T18:50:52.630012532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:52.631760 containerd[1560]: time="2026-01-23T18:50:52.631593377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:50:52.631760 containerd[1560]: time="2026-01-23T18:50:52.631602537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:50:52.632147 kubelet[2719]: E0123 18:50:52.632052 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:50:52.632147 kubelet[2719]: E0123 18:50:52.632141 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:50:52.632663 kubelet[2719]: E0123 18:50:52.632454 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-677f685ddf-wzbds_calico-system(12d6ca3a-1fd4-422a-92d6-048bdc9d3706): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:52.633126 containerd[1560]: time="2026-01-23T18:50:52.633088183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:50:52.634272 kubelet[2719]: E0123 18:50:52.634220 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:50:52.758419 containerd[1560]: time="2026-01-23T18:50:52.758336533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:52.759528 containerd[1560]: time="2026-01-23T18:50:52.759434897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:50:52.759727 containerd[1560]: time="2026-01-23T18:50:52.759551328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:52.759829 kubelet[2719]: E0123 18:50:52.759763 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:50:52.759900 kubelet[2719]: E0123 18:50:52.759844 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:50:52.760053 kubelet[2719]: E0123 18:50:52.759992 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8drl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tn6z6_calico-system(39b71af8-c04c-43e8-b2a3-f5d78af0b0fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:52.761425 kubelet[2719]: E0123 18:50:52.761389 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:50:53.305269 kubelet[2719]: E0123 18:50:53.305180 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:50:53.307741 kubelet[2719]: E0123 18:50:53.307696 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:50:53.567951 systemd-networkd[1442]: cali445c26e9f0e: Gained IPv6LL Jan 23 18:50:54.123244 kubelet[2719]: E0123 18:50:54.122249 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:54.128972 containerd[1560]: time="2026-01-23T18:50:54.123092158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5sx8f,Uid:ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:54.128972 containerd[1560]: time="2026-01-23T18:50:54.123524669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-tm7sj,Uid:a2a6e99e-bac7-4999-a563-b6f5faa05139,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:54.128972 containerd[1560]: time="2026-01-23T18:50:54.123751550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b2hd,Uid:9e8a8862-2354-40b6-9db2-d22bd07a4dc3,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:54.279687 systemd-networkd[1442]: cali5b249d16254: Gained IPv6LL Jan 23 18:50:54.312356 kubelet[2719]: E0123 18:50:54.311434 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:50:54.312356 kubelet[2719]: E0123 18:50:54.312258 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:50:54.342278 systemd-networkd[1442]: caliea9b386deaf: Link UP Jan 23 18:50:54.343387 systemd-networkd[1442]: caliea9b386deaf: Gained carrier Jan 23 18:50:54.359647 containerd[1560]: 2026-01-23 18:50:54.197 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-csi--node--driver--2b2hd-eth0 csi-node-driver- calico-system 9e8a8862-2354-40b6-9db2-d22bd07a4dc3 766 0 2026-01-23 18:50:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-197-220 csi-node-driver-2b2hd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliea9b386deaf [] [] }} ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-" Jan 23 18:50:54.359647 containerd[1560]: 2026-01-23 18:50:54.197 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" Jan 23 18:50:54.359647 containerd[1560]: 2026-01-23 18:50:54.256 [INFO][4310] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" HandleID="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Workload="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.257 [INFO][4310] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" HandleID="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Workload="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-197-220", "pod":"csi-node-driver-2b2hd", "timestamp":"2026-01-23 18:50:54.256799356 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.257 [INFO][4310] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.257 [INFO][4310] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.258 [INFO][4310] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.281 [INFO][4310] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" host="172-239-197-220" Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.294 [INFO][4310] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.300 [INFO][4310] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.302 [INFO][4310] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.305 [INFO][4310] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.360211 containerd[1560]: 2026-01-23 18:50:54.305 [INFO][4310] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" host="172-239-197-220" Jan 23 18:50:54.360438 containerd[1560]: 2026-01-23 18:50:54.306 [INFO][4310] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318 Jan 23 18:50:54.360438 containerd[1560]: 2026-01-23 18:50:54.314 [INFO][4310] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" host="172-239-197-220" Jan 23 18:50:54.360438 containerd[1560]: 2026-01-23 18:50:54.322 [INFO][4310] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.68/26] block=192.168.81.64/26 handle="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" host="172-239-197-220" Jan 23 18:50:54.360438 containerd[1560]: 2026-01-23 18:50:54.322 [INFO][4310] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.68/26] handle="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" host="172-239-197-220" Jan 23 18:50:54.360438 containerd[1560]: 2026-01-23 18:50:54.323 [INFO][4310] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:50:54.360438 containerd[1560]: 2026-01-23 18:50:54.324 [INFO][4310] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.68/26] IPv6=[] ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" HandleID="k8s-pod-network.85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Workload="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" Jan 23 18:50:54.360557 containerd[1560]: 2026-01-23 18:50:54.329 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-csi--node--driver--2b2hd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e8a8862-2354-40b6-9db2-d22bd07a4dc3", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"csi-node-driver-2b2hd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea9b386deaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:54.360614 containerd[1560]: 2026-01-23 18:50:54.329 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.68/32] ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" Jan 23 18:50:54.360614 containerd[1560]: 2026-01-23 18:50:54.329 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea9b386deaf ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" Jan 23 18:50:54.360614 containerd[1560]: 2026-01-23 18:50:54.344 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" Jan 23 18:50:54.360674 containerd[1560]: 2026-01-23 18:50:54.345 [INFO][4289] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" 
Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-csi--node--driver--2b2hd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e8a8862-2354-40b6-9db2-d22bd07a4dc3", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318", Pod:"csi-node-driver-2b2hd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea9b386deaf", MAC:"3a:e4:8c:02:eb:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:54.360728 containerd[1560]: 2026-01-23 18:50:54.356 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" Namespace="calico-system" Pod="csi-node-driver-2b2hd" WorkloadEndpoint="172--239--197--220-k8s-csi--node--driver--2b2hd-eth0" Jan 23 18:50:54.392230 containerd[1560]: time="2026-01-23T18:50:54.392036938Z" level=info msg="connecting to shim 85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318" address="unix:///run/containerd/s/5da2bdbac220245143b9fdcde0a5a7650edfada30f6f5bbdc44250ae99d18d71" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:54.438086 systemd[1]: Started cri-containerd-85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318.scope - libcontainer container 85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318. 
Jan 23 18:50:54.451313 systemd-networkd[1442]: calid1e904a5ecd: Link UP Jan 23 18:50:54.452219 systemd-networkd[1442]: calid1e904a5ecd: Gained carrier Jan 23 18:50:54.473972 containerd[1560]: 2026-01-23 18:50:54.229 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0 calico-apiserver-5945bdff85- calico-apiserver a2a6e99e-bac7-4999-a563-b6f5faa05139 872 0 2026-01-23 18:50:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5945bdff85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-197-220 calico-apiserver-5945bdff85-tm7sj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid1e904a5ecd [] [] }} ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-" Jan 23 18:50:54.473972 containerd[1560]: 2026-01-23 18:50:54.229 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" Jan 23 18:50:54.473972 containerd[1560]: 2026-01-23 18:50:54.270 [INFO][4320] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" HandleID="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Workload="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.288 [INFO][4320] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" HandleID="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Workload="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5cc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-197-220", "pod":"calico-apiserver-5945bdff85-tm7sj", "timestamp":"2026-01-23 18:50:54.270679483 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.289 [INFO][4320] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.322 [INFO][4320] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.322 [INFO][4320] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.379 [INFO][4320] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" host="172-239-197-220" Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.394 [INFO][4320] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.406 [INFO][4320] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.409 [INFO][4320] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.474378 containerd[1560]: 2026-01-23 18:50:54.412 [INFO][4320] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.474600 containerd[1560]: 2026-01-23 18:50:54.412 [INFO][4320] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" host="172-239-197-220" Jan 23 18:50:54.474600 containerd[1560]: 2026-01-23 18:50:54.414 [INFO][4320] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf Jan 23 18:50:54.474600 containerd[1560]: 2026-01-23 18:50:54.420 [INFO][4320] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" host="172-239-197-220" Jan 23 18:50:54.474600 containerd[1560]: 2026-01-23 18:50:54.433 [INFO][4320] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.69/26] block=192.168.81.64/26 handle="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" host="172-239-197-220" Jan 23 18:50:54.474600 containerd[1560]: 2026-01-23 18:50:54.433 [INFO][4320] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.69/26] handle="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" host="172-239-197-220" Jan 23 18:50:54.474600 containerd[1560]: 2026-01-23 18:50:54.433 [INFO][4320] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:50:54.474600 containerd[1560]: 2026-01-23 18:50:54.433 [INFO][4320] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.69/26] IPv6=[] ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" HandleID="k8s-pod-network.f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Workload="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" Jan 23 18:50:54.474745 containerd[1560]: 2026-01-23 18:50:54.442 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0", GenerateName:"calico-apiserver-5945bdff85-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2a6e99e-bac7-4999-a563-b6f5faa05139", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5945bdff85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"calico-apiserver-5945bdff85-tm7sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1e904a5ecd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:54.474824 containerd[1560]: 2026-01-23 18:50:54.442 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.69/32] ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" Jan 23 18:50:54.474824 containerd[1560]: 2026-01-23 18:50:54.442 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1e904a5ecd ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" Jan 23 18:50:54.474824 containerd[1560]: 2026-01-23 18:50:54.452 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" Jan 23 18:50:54.474919 containerd[1560]: 2026-01-23 18:50:54.454 [INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0", GenerateName:"calico-apiserver-5945bdff85-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2a6e99e-bac7-4999-a563-b6f5faa05139", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5945bdff85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf", Pod:"calico-apiserver-5945bdff85-tm7sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid1e904a5ecd", MAC:"a6:3c:c5:04:97:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:54.475015 containerd[1560]: 2026-01-23 18:50:54.469 [INFO][4275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-tm7sj" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--tm7sj-eth0" Jan 23 18:50:54.518180 containerd[1560]: time="2026-01-23T18:50:54.518099779Z" level=info msg="connecting to shim f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf" address="unix:///run/containerd/s/1dfa65016b9e187236fee1f0cff4f73549c0071b9f636d04297121d59c0cb666" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:54.538387 systemd-networkd[1442]: cali38e8ede6de5: Link UP Jan 23 18:50:54.541151 systemd-networkd[1442]: cali38e8ede6de5: Gained carrier Jan 23 18:50:54.591800 containerd[1560]: 2026-01-23 18:50:54.220 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0 coredns-674b8bbfcf- kube-system ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d 867 0 2026-01-23 18:50:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-197-220 coredns-674b8bbfcf-5sx8f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38e8ede6de5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" 
WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-" Jan 23 18:50:54.591800 containerd[1560]: 2026-01-23 18:50:54.220 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" Jan 23 18:50:54.591800 containerd[1560]: 2026-01-23 18:50:54.302 [INFO][4318] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" HandleID="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Workload="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.302 [INFO][4318] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" HandleID="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Workload="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9f0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-197-220", "pod":"coredns-674b8bbfcf-5sx8f", "timestamp":"2026-01-23 18:50:54.302248634 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.302 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.434 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.435 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.475 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" host="172-239-197-220" Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.495 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.503 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.506 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.510 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:54.592304 containerd[1560]: 2026-01-23 18:50:54.510 [INFO][4318] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" host="172-239-197-220" Jan 23 18:50:54.592530 containerd[1560]: 2026-01-23 18:50:54.512 [INFO][4318] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef Jan 23 18:50:54.592530 containerd[1560]: 2026-01-23 18:50:54.516 [INFO][4318] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" host="172-239-197-220" Jan 23 18:50:54.592530 containerd[1560]: 2026-01-23 18:50:54.524 [INFO][4318] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.70/26] block=192.168.81.64/26 handle="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" host="172-239-197-220" Jan 23 18:50:54.592530 containerd[1560]: 2026-01-23 18:50:54.524 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.70/26] handle="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" host="172-239-197-220" Jan 23 18:50:54.592530 containerd[1560]: 2026-01-23 18:50:54.524 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:50:54.592530 containerd[1560]: 2026-01-23 18:50:54.524 [INFO][4318] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.70/26] IPv6=[] ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" HandleID="k8s-pod-network.00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Workload="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" Jan 23 18:50:54.592656 containerd[1560]: 2026-01-23 18:50:54.528 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"coredns-674b8bbfcf-5sx8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38e8ede6de5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:54.592656 containerd[1560]: 2026-01-23 18:50:54.528 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.70/32] ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" Jan 23 18:50:54.592656 containerd[1560]: 2026-01-23 18:50:54.528 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38e8ede6de5 ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" Jan 23 18:50:54.592656 containerd[1560]: 2026-01-23 18:50:54.542 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" 
WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" Jan 23 18:50:54.592656 containerd[1560]: 2026-01-23 18:50:54.545 [INFO][4273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef", Pod:"coredns-674b8bbfcf-5sx8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38e8ede6de5", MAC:"2a:71:32:96:f0:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:54.592656 containerd[1560]: 2026-01-23 18:50:54.557 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" Namespace="kube-system" Pod="coredns-674b8bbfcf-5sx8f" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--5sx8f-eth0" Jan 23 18:50:54.616318 containerd[1560]: time="2026-01-23T18:50:54.616260653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b2hd,Uid:9e8a8862-2354-40b6-9db2-d22bd07a4dc3,Namespace:calico-system,Attempt:0,} returns sandbox id \"85de8d71282d1de319587bbc180587f738cbdef5e884fc40157e36ae34861318\"" Jan 23 18:50:54.619206 containerd[1560]: time="2026-01-23T18:50:54.619181403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:50:54.619965 systemd[1]: Started cri-containerd-f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf.scope - libcontainer container f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf. 
Jan 23 18:50:54.623464 containerd[1560]: time="2026-01-23T18:50:54.623374798Z" level=info msg="connecting to shim 00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef" address="unix:///run/containerd/s/357a4de44d40fbed0c452e2114ad76b4a7eee1f22a7a9748ae09ad15474ea0d6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:54.677226 systemd[1]: Started cri-containerd-00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef.scope - libcontainer container 00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef. Jan 23 18:50:54.717642 containerd[1560]: time="2026-01-23T18:50:54.717586618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-tm7sj,Uid:a2a6e99e-bac7-4999-a563-b6f5faa05139,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f85dcadddf08e46a235011a5bd81b66a7d82c662d65f2e1714bea666ef3940cf\"" Jan 23 18:50:54.754342 containerd[1560]: time="2026-01-23T18:50:54.754221785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5sx8f,Uid:ba3daaa2-8f67-4318-8e25-c0cfe3f5ea4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef\"" Jan 23 18:50:54.755463 kubelet[2719]: E0123 18:50:54.755305 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:54.760524 containerd[1560]: time="2026-01-23T18:50:54.760485477Z" level=info msg="CreateContainer within sandbox \"00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:50:54.768205 containerd[1560]: time="2026-01-23T18:50:54.768154214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:54.768877 containerd[1560]: time="2026-01-23T18:50:54.768823356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:50:54.768922 containerd[1560]: time="2026-01-23T18:50:54.768897926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:50:54.769060 kubelet[2719]: E0123 18:50:54.769018 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:50:54.769060 kubelet[2719]: E0123 18:50:54.769053 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:50:54.770507 containerd[1560]: time="2026-01-23T18:50:54.770165631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:50:54.771061 kubelet[2719]: E0123 18:50:54.770705 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:54.774812 containerd[1560]: time="2026-01-23T18:50:54.774745887Z" level=info msg="Container 41091cbd1cea80077ef3ca103cb2b1f61e74757740eff9c08da56ad4d378bab1: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:54.787008 containerd[1560]: time="2026-01-23T18:50:54.786963940Z" level=info msg="CreateContainer within sandbox \"00755cfb70177141cca98dff932f969c992c3f160c39d3bd241494f8d13653ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41091cbd1cea80077ef3ca103cb2b1f61e74757740eff9c08da56ad4d378bab1\"" Jan 23 18:50:54.792635 containerd[1560]: time="2026-01-23T18:50:54.792404369Z" level=info msg="StartContainer for \"41091cbd1cea80077ef3ca103cb2b1f61e74757740eff9c08da56ad4d378bab1\"" Jan 23 18:50:54.794383 containerd[1560]: time="2026-01-23T18:50:54.794353176Z" level=info msg="connecting to shim 41091cbd1cea80077ef3ca103cb2b1f61e74757740eff9c08da56ad4d378bab1" address="unix:///run/containerd/s/357a4de44d40fbed0c452e2114ad76b4a7eee1f22a7a9748ae09ad15474ea0d6" protocol=ttrpc version=3 Jan 23 18:50:54.814929 systemd[1]: Started cri-containerd-41091cbd1cea80077ef3ca103cb2b1f61e74757740eff9c08da56ad4d378bab1.scope - libcontainer container 41091cbd1cea80077ef3ca103cb2b1f61e74757740eff9c08da56ad4d378bab1. 
Jan 23 18:50:54.857174 containerd[1560]: time="2026-01-23T18:50:54.857110745Z" level=info msg="StartContainer for \"41091cbd1cea80077ef3ca103cb2b1f61e74757740eff9c08da56ad4d378bab1\" returns successfully" Jan 23 18:50:54.904135 containerd[1560]: time="2026-01-23T18:50:54.904029400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:54.905257 containerd[1560]: time="2026-01-23T18:50:54.905214513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:50:54.905330 containerd[1560]: time="2026-01-23T18:50:54.905305984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:54.906146 kubelet[2719]: E0123 18:50:54.906094 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:54.906211 kubelet[2719]: E0123 18:50:54.906164 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:54.906529 kubelet[2719]: E0123 18:50:54.906462 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-tm7sj_calico-apiserver(a2a6e99e-bac7-4999-a563-b6f5faa05139): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:54.907254 containerd[1560]: time="2026-01-23T18:50:54.907216050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:50:54.908318 kubelet[2719]: E0123 18:50:54.908139 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:50:55.040795 containerd[1560]: time="2026-01-23T18:50:55.040702770Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:55.041913 containerd[1560]: time="2026-01-23T18:50:55.041801684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:50:55.041913 containerd[1560]: time="2026-01-23T18:50:55.041844964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:50:55.042213 kubelet[2719]: E0123 18:50:55.042155 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:50:55.042288 kubelet[2719]: E0123 18:50:55.042235 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:50:55.042984 kubelet[2719]: E0123 18:50:55.042428 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:55.043859 kubelet[2719]: E0123 18:50:55.043815 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:50:55.120808 kubelet[2719]: E0123 18:50:55.120381 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:55.121297 containerd[1560]: time="2026-01-23T18:50:55.121240286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-862kv,Uid:4d671af6-7ef5-45f9-9202-d33ec17c60fa,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:55.260891 systemd-networkd[1442]: cali0f8efb932a8: Link UP Jan 23 18:50:55.261220 systemd-networkd[1442]: cali0f8efb932a8: Gained carrier Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.178 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0 coredns-674b8bbfcf- kube-system 4d671af6-7ef5-45f9-9202-d33ec17c60fa 863 0 2026-01-23 18:50:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-197-220 coredns-674b8bbfcf-862kv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0f8efb932a8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.178 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.213 [INFO][4548] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" HandleID="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Workload="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.213 [INFO][4548] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" HandleID="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Workload="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-197-220", "pod":"coredns-674b8bbfcf-862kv", "timestamp":"2026-01-23 18:50:55.21300773 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.213 [INFO][4548] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.213 [INFO][4548] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.213 [INFO][4548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.221 [INFO][4548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.226 [INFO][4548] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.230 [INFO][4548] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.231 [INFO][4548] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.234 [INFO][4548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.234 [INFO][4548] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.237 [INFO][4548] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015 Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.241 [INFO][4548] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.251 [INFO][4548] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.71/26] block=192.168.81.64/26 handle="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.251 [INFO][4548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.71/26] handle="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" host="172-239-197-220" Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.251 [INFO][4548] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
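[editor's note] In the WorkloadEndpoint dumps the ports appear in Go hex notation: Port:0x35 is 53 (DNS) and Port:0x23c1 is 9153 (the CoreDNS metrics port), matching the plain-decimal "{dns UDP 53 0 } {metrics TCP 9153 0 }" form in the "found existing endpoint" entry above. A two-line check:

package main

import "fmt"

func main() {
	fmt.Println(0x35, 0x23c1) // 53 9153: the DNS and CoreDNS metrics ports
}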
Jan 23 18:50:55.285520 containerd[1560]: 2026-01-23 18:50:55.251 [INFO][4548] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.71/26] IPv6=[] ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" HandleID="k8s-pod-network.bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Workload="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" Jan 23 18:50:55.286527 containerd[1560]: 2026-01-23 18:50:55.255 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d671af6-7ef5-45f9-9202-d33ec17c60fa", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"coredns-674b8bbfcf-862kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f8efb932a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:55.286527 containerd[1560]: 2026-01-23 18:50:55.255 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.71/32] ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" Jan 23 18:50:55.286527 containerd[1560]: 2026-01-23 18:50:55.255 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f8efb932a8 ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" Jan 23 18:50:55.286527 containerd[1560]: 2026-01-23 18:50:55.260 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" 
WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" Jan 23 18:50:55.286527 containerd[1560]: 2026-01-23 18:50:55.263 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4d671af6-7ef5-45f9-9202-d33ec17c60fa", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015", Pod:"coredns-674b8bbfcf-862kv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f8efb932a8", MAC:"b2:28:54:1b:61:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:55.286527 containerd[1560]: 2026-01-23 18:50:55.281 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" Namespace="kube-system" Pod="coredns-674b8bbfcf-862kv" WorkloadEndpoint="172--239--197--220-k8s-coredns--674b8bbfcf--862kv-eth0" Jan 23 18:50:55.315082 containerd[1560]: time="2026-01-23T18:50:55.314936387Z" level=info msg="connecting to shim bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015" address="unix:///run/containerd/s/be4c0906fdf951536310a4ad26f269667dbf15469822cfb27a723cfc10e655e7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:55.331541 kubelet[2719]: E0123 18:50:55.331001 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:55.338964 kubelet[2719]: E0123 18:50:55.338156 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:50:55.339198 kubelet[2719]: E0123 18:50:55.339122 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:50:55.361254 kubelet[2719]: I0123 18:50:55.361209 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5sx8f" podStartSLOduration=32.36119233 podStartE2EDuration="32.36119233s" podCreationTimestamp="2026-01-23 18:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:55.356809936 +0000 UTC m=+38.352644150" watchObservedRunningTime="2026-01-23 18:50:55.36119233 +0000 UTC m=+38.357026544" Jan 23 18:50:55.372002 systemd[1]: Started cri-containerd-bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015.scope - libcontainer container bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015. Jan 23 18:50:55.474029 containerd[1560]: time="2026-01-23T18:50:55.473947524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-862kv,Uid:4d671af6-7ef5-45f9-9202-d33ec17c60fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015\"" Jan 23 18:50:55.475286 kubelet[2719]: E0123 18:50:55.475225 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:55.480984 containerd[1560]: time="2026-01-23T18:50:55.480936887Z" level=info msg="CreateContainer within sandbox \"bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:50:55.487098 systemd-networkd[1442]: caliea9b386deaf: Gained IPv6LL Jan 23 18:50:55.494559 containerd[1560]: time="2026-01-23T18:50:55.494241431Z" level=info msg="Container 9f400ed947d9cfc01a2022349f2ea0a9fdf89c730452539e9d3f7a69aa1db39f: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:55.503489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087867382.mount: Deactivated successfully. 
Jan 23 18:50:55.513172 containerd[1560]: time="2026-01-23T18:50:55.512950243Z" level=info msg="CreateContainer within sandbox \"bb00bd1e380a7a24e0de7d301ca238442bf97ce0efdcebf8d6f4768c4c786015\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f400ed947d9cfc01a2022349f2ea0a9fdf89c730452539e9d3f7a69aa1db39f\"" Jan 23 18:50:55.515676 containerd[1560]: time="2026-01-23T18:50:55.514407908Z" level=info msg="StartContainer for \"9f400ed947d9cfc01a2022349f2ea0a9fdf89c730452539e9d3f7a69aa1db39f\"" Jan 23 18:50:55.517197 containerd[1560]: time="2026-01-23T18:50:55.517174747Z" level=info msg="connecting to shim 9f400ed947d9cfc01a2022349f2ea0a9fdf89c730452539e9d3f7a69aa1db39f" address="unix:///run/containerd/s/be4c0906fdf951536310a4ad26f269667dbf15469822cfb27a723cfc10e655e7" protocol=ttrpc version=3 Jan 23 18:50:55.558085 systemd[1]: Started cri-containerd-9f400ed947d9cfc01a2022349f2ea0a9fdf89c730452539e9d3f7a69aa1db39f.scope - libcontainer container 9f400ed947d9cfc01a2022349f2ea0a9fdf89c730452539e9d3f7a69aa1db39f. Jan 23 18:50:55.613289 containerd[1560]: time="2026-01-23T18:50:55.613159535Z" level=info msg="StartContainer for \"9f400ed947d9cfc01a2022349f2ea0a9fdf89c730452539e9d3f7a69aa1db39f\" returns successfully" Jan 23 18:50:56.062998 systemd-networkd[1442]: calid1e904a5ecd: Gained IPv6LL Jan 23 18:50:56.120249 containerd[1560]: time="2026-01-23T18:50:56.120194203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-k9mzc,Uid:d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:56.254045 systemd-networkd[1442]: calia858d7d1e2f: Link UP Jan 23 18:50:56.256041 systemd-networkd[1442]: calia858d7d1e2f: Gained carrier Jan 23 18:50:56.257090 systemd-networkd[1442]: cali38e8ede6de5: Gained IPv6LL Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.167 [INFO][4650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0 calico-apiserver-5945bdff85- calico-apiserver d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9 871 0 2026-01-23 18:50:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5945bdff85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-197-220 calico-apiserver-5945bdff85-k9mzc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia858d7d1e2f [] [] }} ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.167 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.204 [INFO][4663] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" HandleID="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" 
Workload="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.204 [INFO][4663] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" HandleID="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Workload="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-197-220", "pod":"calico-apiserver-5945bdff85-k9mzc", "timestamp":"2026-01-23 18:50:56.204164647 +0000 UTC"}, Hostname:"172-239-197-220", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.204 [INFO][4663] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.204 [INFO][4663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.204 [INFO][4663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-197-220' Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.214 [INFO][4663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.220 [INFO][4663] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.226 [INFO][4663] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.228 [INFO][4663] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.231 [INFO][4663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.231 [INFO][4663] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.233 [INFO][4663] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625 Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.238 [INFO][4663] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.244 [INFO][4663] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.81.72/26] block=192.168.81.64/26 handle="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.244 [INFO][4663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.72/26] handle="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" 
host="172-239-197-220" Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.244 [INFO][4663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:50:56.274396 containerd[1560]: 2026-01-23 18:50:56.244 [INFO][4663] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.81.72/26] IPv6=[] ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" HandleID="k8s-pod-network.bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Workload="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" Jan 23 18:50:56.275054 containerd[1560]: 2026-01-23 18:50:56.248 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0", GenerateName:"calico-apiserver-5945bdff85-", Namespace:"calico-apiserver", SelfLink:"", UID:"d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5945bdff85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"", Pod:"calico-apiserver-5945bdff85-k9mzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia858d7d1e2f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:56.275054 containerd[1560]: 2026-01-23 18:50:56.248 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.72/32] ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" Jan 23 18:50:56.275054 containerd[1560]: 2026-01-23 18:50:56.248 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia858d7d1e2f ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" Jan 23 18:50:56.275054 containerd[1560]: 2026-01-23 18:50:56.259 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" 
WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" Jan 23 18:50:56.275054 containerd[1560]: 2026-01-23 18:50:56.261 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0", GenerateName:"calico-apiserver-5945bdff85-", Namespace:"calico-apiserver", SelfLink:"", UID:"d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5945bdff85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-197-220", ContainerID:"bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625", Pod:"calico-apiserver-5945bdff85-k9mzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia858d7d1e2f", MAC:"ba:43:b9:ff:34:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:56.275054 containerd[1560]: 2026-01-23 18:50:56.269 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" Namespace="calico-apiserver" Pod="calico-apiserver-5945bdff85-k9mzc" WorkloadEndpoint="172--239--197--220-k8s-calico--apiserver--5945bdff85--k9mzc-eth0" Jan 23 18:50:56.314761 containerd[1560]: time="2026-01-23T18:50:56.314569003Z" level=info msg="connecting to shim bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625" address="unix:///run/containerd/s/38120e2174c0a1656996b5c2c152ab91dc93d6b8cf10cdb6d9701bf7bb4c0722" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:56.342504 kubelet[2719]: E0123 18:50:56.342442 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:56.347953 kubelet[2719]: E0123 18:50:56.343130 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:56.348173 kubelet[2719]: E0123 18:50:56.347950 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:50:56.349060 kubelet[2719]: E0123 18:50:56.348990 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:50:56.361137 systemd[1]: Started cri-containerd-bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625.scope - libcontainer container bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625. Jan 23 18:50:56.409575 kubelet[2719]: I0123 18:50:56.407113 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-862kv" podStartSLOduration=33.407086853 podStartE2EDuration="33.407086853s" podCreationTimestamp="2026-01-23 18:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:56.383035247 +0000 UTC m=+39.378869481" watchObservedRunningTime="2026-01-23 18:50:56.407086853 +0000 UTC m=+39.402921057" Jan 23 18:50:56.476269 containerd[1560]: time="2026-01-23T18:50:56.476162559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5945bdff85-k9mzc,Uid:d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bde4f6ea800092ef9a4669c65c4548de123bf12f580dc101250b49656432f625\"" Jan 23 18:50:56.479218 containerd[1560]: time="2026-01-23T18:50:56.479066898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:50:56.637651 containerd[1560]: time="2026-01-23T18:50:56.637411925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:56.638469 containerd[1560]: time="2026-01-23T18:50:56.638422057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:50:56.638532 containerd[1560]: time="2026-01-23T18:50:56.638517028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:56.638719 kubelet[2719]: E0123 18:50:56.638663 2719 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:56.639189 kubelet[2719]: E0123 18:50:56.638720 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:56.639189 kubelet[2719]: E0123 18:50:56.638958 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtp9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-k9mzc_calico-apiserver(d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:56.640571 kubelet[2719]: E0123 18:50:56.640527 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:50:57.087034 systemd-networkd[1442]: cali0f8efb932a8: Gained IPv6LL Jan 23 18:50:57.346167 kubelet[2719]: E0123 18:50:57.346003 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:57.348297 kubelet[2719]: E0123 18:50:57.347235 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:50:57.348297 kubelet[2719]: E0123 18:50:57.347963 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:57.663894 systemd-networkd[1442]: calia858d7d1e2f: Gained IPv6LL Jan 23 18:50:58.349257 kubelet[2719]: E0123 18:50:58.348818 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:50:58.349954 kubelet[2719]: E0123 18:50:58.349928 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:51:01.124407 containerd[1560]: time="2026-01-23T18:51:01.124351716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:51:01.258485 containerd[1560]: time="2026-01-23T18:51:01.258439959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:01.259282 containerd[1560]: time="2026-01-23T18:51:01.259232490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:51:01.259344 containerd[1560]: time="2026-01-23T18:51:01.259251190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:51:01.259518 kubelet[2719]: E0123 18:51:01.259479 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:01.260009 kubelet[2719]: E0123 18:51:01.259536 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:01.260954 kubelet[2719]: E0123 18:51:01.260885 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c2dcaa451a7d498d98335548b24bb005,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:01.263478 containerd[1560]: time="2026-01-23T18:51:01.263455070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:51:01.401865 containerd[1560]: time="2026-01-23T18:51:01.401726013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:01.402592 containerd[1560]: time="2026-01-23T18:51:01.402542295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:51:01.402751 containerd[1560]: time="2026-01-23T18:51:01.402622956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:51:01.402894 kubelet[2719]: E0123 18:51:01.402813 2719 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:51:01.402894 kubelet[2719]: E0123 18:51:01.402880 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:51:01.403056 kubelet[2719]: E0123 18:51:01.403012 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:01.404455 kubelet[2719]: E0123 18:51:01.404396 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:51:08.121106 containerd[1560]: time="2026-01-23T18:51:08.121011122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:51:08.255366 containerd[1560]: time="2026-01-23T18:51:08.255211282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:08.256512 containerd[1560]: time="2026-01-23T18:51:08.256423363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:51:08.256512 containerd[1560]: time="2026-01-23T18:51:08.256444793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:08.256757 kubelet[2719]: E0123 18:51:08.256701 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:51:08.256757 kubelet[2719]: E0123 18:51:08.256754 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:51:08.257558 kubelet[2719]: E0123 18:51:08.257096 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8drl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tn6z6_calico-system(39b71af8-c04c-43e8-b2a3-f5d78af0b0fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:08.258852 kubelet[2719]: E0123 18:51:08.258746 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:51:09.123966 containerd[1560]: 
time="2026-01-23T18:51:09.123913822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:51:09.263197 containerd[1560]: time="2026-01-23T18:51:09.263151000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:09.264211 containerd[1560]: time="2026-01-23T18:51:09.264165462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:51:09.264268 containerd[1560]: time="2026-01-23T18:51:09.264251472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:51:09.264733 kubelet[2719]: E0123 18:51:09.264418 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:09.264733 kubelet[2719]: E0123 18:51:09.264517 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:09.264733 kubelet[2719]: E0123 18:51:09.264678 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:09.267446 containerd[1560]: time="2026-01-23T18:51:09.267376697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:51:09.394573 containerd[1560]: time="2026-01-23T18:51:09.394446115Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:09.395596 containerd[1560]: time="2026-01-23T18:51:09.395562827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:51:09.395688 containerd[1560]: time="2026-01-23T18:51:09.395631757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:51:09.396150 kubelet[2719]: E0123 18:51:09.396086 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:09.396202 kubelet[2719]: E0123 18:51:09.396148 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:09.396349 kubelet[2719]: E0123 18:51:09.396295 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:09.398031 kubelet[2719]: E0123 18:51:09.397961 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:51:10.121373 containerd[1560]: time="2026-01-23T18:51:10.121303457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:51:10.280014 containerd[1560]: time="2026-01-23T18:51:10.279918766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:10.280846 containerd[1560]: time="2026-01-23T18:51:10.280808507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:51:10.280934 containerd[1560]: time="2026-01-23T18:51:10.280897087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:51:10.281115 kubelet[2719]: E0123 18:51:10.281056 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:51:10.281115 kubelet[2719]: E0123 18:51:10.281108 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:51:10.281628 kubelet[2719]: E0123 18:51:10.281352 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-677f685ddf-wzbds_calico-system(12d6ca3a-1fd4-422a-92d6-048bdc9d3706): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:10.282340 containerd[1560]: time="2026-01-23T18:51:10.282314570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:10.283395 kubelet[2719]: E0123 18:51:10.283342 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:51:10.420627 containerd[1560]: time="2026-01-23T18:51:10.420392006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:10.421319 containerd[1560]: time="2026-01-23T18:51:10.421284577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:10.421448 containerd[1560]: time="2026-01-23T18:51:10.421352497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:10.421617 kubelet[2719]: E0123 18:51:10.421545 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:10.421666 kubelet[2719]: E0123 18:51:10.421622 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:10.422153 kubelet[2719]: 
E0123 18:51:10.422024 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-tm7sj_calico-apiserver(a2a6e99e-bac7-4999-a563-b6f5faa05139): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:10.423224 kubelet[2719]: E0123 18:51:10.423189 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:51:12.121064 containerd[1560]: time="2026-01-23T18:51:12.121022778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:12.264727 containerd[1560]: time="2026-01-23T18:51:12.264554545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:12.265709 containerd[1560]: time="2026-01-23T18:51:12.265616367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:12.265709 containerd[1560]: time="2026-01-23T18:51:12.265652807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:12.265945 kubelet[2719]: E0123 18:51:12.265900 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:12.266343 kubelet[2719]: E0123 18:51:12.265953 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:12.266343 kubelet[2719]: E0123 18:51:12.266099 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtp9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-k9mzc_calico-apiserver(d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:12.267660 kubelet[2719]: E0123 18:51:12.267624 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:51:17.122602 kubelet[2719]: E0123 18:51:17.122488 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:51:18.378921 kubelet[2719]: E0123 18:51:18.378580 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:51:20.120944 kubelet[2719]: E0123 18:51:20.120872 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:51:22.122535 kubelet[2719]: E0123 18:51:22.122450 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:51:23.123001 kubelet[2719]: E0123 18:51:23.122941 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:51:23.126913 kubelet[2719]: E0123 18:51:23.125923 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:51:24.122670 kubelet[2719]: E0123 18:51:24.122595 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:51:31.121615 kubelet[2719]: E0123 18:51:31.121059 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:51:32.121206 containerd[1560]: time="2026-01-23T18:51:32.121123025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:51:32.267735 containerd[1560]: time="2026-01-23T18:51:32.267680104Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:32.268971 containerd[1560]: time="2026-01-23T18:51:32.268851536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:51:32.268971 containerd[1560]: time="2026-01-23T18:51:32.268943008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:32.269227 kubelet[2719]: E0123 18:51:32.269172 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:51:32.269679 kubelet[2719]: E0123 18:51:32.269645 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:51:32.269992 kubelet[2719]: E0123 18:51:32.269937 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8drl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tn6z6_calico-system(39b71af8-c04c-43e8-b2a3-f5d78af0b0fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:32.270229 containerd[1560]: time="2026-01-23T18:51:32.270148596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:51:32.271397 kubelet[2719]: E0123 18:51:32.271365 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:51:32.453208 containerd[1560]: time="2026-01-23T18:51:32.452918816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:32.454182 containerd[1560]: time="2026-01-23T18:51:32.454082248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:51:32.454182 containerd[1560]: time="2026-01-23T18:51:32.454154471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:51:32.455034 kubelet[2719]: E0123 18:51:32.454434 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:32.455034 kubelet[2719]: E0123 18:51:32.454483 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:32.455034 kubelet[2719]: E0123 18:51:32.454608 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c2dcaa451a7d498d98335548b24bb005,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:32.457304 containerd[1560]: time="2026-01-23T18:51:32.457264784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:51:32.665253 containerd[1560]: time="2026-01-23T18:51:32.663928259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:32.665253 containerd[1560]: time="2026-01-23T18:51:32.664843994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:51:32.665253 containerd[1560]: time="2026-01-23T18:51:32.664936506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:51:32.667943 kubelet[2719]: E0123 18:51:32.665179 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:51:32.667943 kubelet[2719]: E0123 18:51:32.665246 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:51:32.667943 kubelet[2719]: E0123 18:51:32.665462 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:32.667943 kubelet[2719]: E0123 18:51:32.666700 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:51:33.123884 containerd[1560]: time="2026-01-23T18:51:33.123835193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:51:33.260734 containerd[1560]: time="2026-01-23T18:51:33.260684285Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 23 18:51:33.261871 containerd[1560]: time="2026-01-23T18:51:33.261810815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:51:33.261996 containerd[1560]: time="2026-01-23T18:51:33.261929434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:51:33.262296 kubelet[2719]: E0123 18:51:33.262180 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:51:33.262461 kubelet[2719]: E0123 18:51:33.262420 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:51:33.262862 kubelet[2719]: E0123 18:51:33.262803 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-677f685ddf-wzbds_calico-system(12d6ca3a-1fd4-422a-92d6-048bdc9d3706): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:33.264301 kubelet[2719]: E0123 18:51:33.264253 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:51:35.124968 containerd[1560]: time="2026-01-23T18:51:35.124903606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:51:35.289445 containerd[1560]: time="2026-01-23T18:51:35.289137837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:35.290325 containerd[1560]: time="2026-01-23T18:51:35.290193407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:51:35.290533 containerd[1560]: time="2026-01-23T18:51:35.290311887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:51:35.291062 kubelet[2719]: E0123 18:51:35.290998 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:35.292287 kubelet[2719]: E0123 18:51:35.291735 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:35.292441 kubelet[2719]: E0123 18:51:35.292178 2719 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:35.296359 containerd[1560]: time="2026-01-23T18:51:35.296178749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:51:35.430791 containerd[1560]: time="2026-01-23T18:51:35.429902259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:35.431204 containerd[1560]: time="2026-01-23T18:51:35.431153804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:51:35.431300 containerd[1560]: time="2026-01-23T18:51:35.431241046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:51:35.431509 kubelet[2719]: E0123 18:51:35.431452 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:35.431576 kubelet[2719]: E0123 18:51:35.431533 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:35.431787 kubelet[2719]: E0123 18:51:35.431720 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:35.433449 kubelet[2719]: E0123 18:51:35.433346 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:51:38.120934 containerd[1560]: time="2026-01-23T18:51:38.120874724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:38.264891 containerd[1560]: time="2026-01-23T18:51:38.264827943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:38.265694 containerd[1560]: time="2026-01-23T18:51:38.265638529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:38.265815 containerd[1560]: time="2026-01-23T18:51:38.265727002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:38.265991 kubelet[2719]: E0123 18:51:38.265940 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:38.266506 kubelet[2719]: E0123 18:51:38.266007 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:38.266506 kubelet[2719]: E0123 18:51:38.266176 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-tm7sj_calico-apiserver(a2a6e99e-bac7-4999-a563-b6f5faa05139): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:38.268512 kubelet[2719]: E0123 18:51:38.268012 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:51:39.129996 containerd[1560]: time="2026-01-23T18:51:39.129451070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:39.265720 containerd[1560]: time="2026-01-23T18:51:39.265653319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:39.266992 containerd[1560]: time="2026-01-23T18:51:39.266870086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:39.266992 containerd[1560]: time="2026-01-23T18:51:39.266966370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:39.267354 kubelet[2719]: E0123 18:51:39.267240 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:39.267354 kubelet[2719]: E0123 18:51:39.267312 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:39.268475 kubelet[2719]: E0123 
18:51:39.268305 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtp9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-k9mzc_calico-apiserver(d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:39.269663 kubelet[2719]: E0123 18:51:39.269621 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:51:44.120097 kubelet[2719]: E0123 18:51:44.120043 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:51:45.123217 kubelet[2719]: E0123 18:51:45.123155 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:51:46.121089 kubelet[2719]: E0123 18:51:46.121045 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:51:46.123273 kubelet[2719]: E0123 18:51:46.123222 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:51:47.122138 kubelet[2719]: E0123 18:51:47.121110 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:51:48.126105 kubelet[2719]: E0123 18:51:48.125872 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:51:50.120463 kubelet[2719]: E0123 18:51:50.120165 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:51:50.122320 kubelet[2719]: E0123 18:51:50.122284 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:51:51.121915 kubelet[2719]: E0123 18:51:51.121225 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:51:54.120243 kubelet[2719]: E0123 18:51:54.120007 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:51:57.125063 kubelet[2719]: E0123 18:51:57.124900 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:51:57.126249 kubelet[2719]: E0123 18:51:57.125626 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:51:58.120690 kubelet[2719]: E0123 18:51:58.120303 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 
23 18:51:58.124894 kubelet[2719]: E0123 18:51:58.124858 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:52:02.121964 kubelet[2719]: E0123 18:52:02.121897 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:52:02.122568 kubelet[2719]: E0123 18:52:02.122274 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:52:06.124321 kubelet[2719]: E0123 18:52:06.124261 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:52:08.120789 kubelet[2719]: E0123 18:52:08.119978 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:52:10.122642 kubelet[2719]: E0123 18:52:10.122528 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:52:11.125269 kubelet[2719]: E0123 18:52:11.125155 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:52:13.122792 kubelet[2719]: E0123 18:52:13.122362 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:52:13.124216 kubelet[2719]: E0123 18:52:13.124166 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:52:15.326205 systemd[1]: Started sshd@7-172.239.197.220:22-68.220.241.50:37454.service - OpenSSH per-connection server daemon (68.220.241.50:37454). 
Jan 23 18:52:15.504052 sshd[4826]: Accepted publickey for core from 68.220.241.50 port 37454 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:15.508009 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:15.513863 systemd-logind[1531]: New session 8 of user core. Jan 23 18:52:15.522907 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:52:15.775066 sshd[4829]: Connection closed by 68.220.241.50 port 37454 Jan 23 18:52:15.775534 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:15.780422 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:52:15.781339 systemd[1]: sshd@7-172.239.197.220:22-68.220.241.50:37454.service: Deactivated successfully. Jan 23 18:52:15.785544 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:52:15.790936 systemd-logind[1531]: Removed session 8. Jan 23 18:52:16.122144 containerd[1560]: time="2026-01-23T18:52:16.121986918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:52:16.314626 containerd[1560]: time="2026-01-23T18:52:16.314537037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:16.316781 containerd[1560]: time="2026-01-23T18:52:16.316713290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:52:16.317671 containerd[1560]: time="2026-01-23T18:52:16.316845866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:52:16.318085 kubelet[2719]: E0123 18:52:16.317991 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:52:16.318428 kubelet[2719]: E0123 18:52:16.318097 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:52:16.318941 kubelet[2719]: E0123 18:52:16.318861 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lgnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-677f685ddf-wzbds_calico-system(12d6ca3a-1fd4-422a-92d6-048bdc9d3706): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:16.320129 kubelet[2719]: E0123 18:52:16.320078 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:52:19.122050 kubelet[2719]: E0123 18:52:19.121353 2719 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:52:20.810821 systemd[1]: Started sshd@8-172.239.197.220:22-68.220.241.50:37460.service - OpenSSH per-connection server daemon (68.220.241.50:37460). Jan 23 18:52:20.982877 sshd[4871]: Accepted publickey for core from 68.220.241.50 port 37460 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:20.984762 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:20.997338 systemd-logind[1531]: New session 9 of user core. Jan 23 18:52:21.001418 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:52:21.234475 sshd[4874]: Connection closed by 68.220.241.50 port 37460 Jan 23 18:52:21.236028 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:21.243266 systemd[1]: sshd@8-172.239.197.220:22-68.220.241.50:37460.service: Deactivated successfully. Jan 23 18:52:21.247460 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:52:21.249535 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:52:21.251458 systemd-logind[1531]: Removed session 9. Jan 23 18:52:22.121431 containerd[1560]: time="2026-01-23T18:52:22.121053854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:52:22.249340 containerd[1560]: time="2026-01-23T18:52:22.249168374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:22.250322 containerd[1560]: time="2026-01-23T18:52:22.250232694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:52:22.250604 containerd[1560]: time="2026-01-23T18:52:22.250587684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:52:22.250809 kubelet[2719]: E0123 18:52:22.250753 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:52:22.252797 kubelet[2719]: E0123 18:52:22.251208 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:52:22.252797 kubelet[2719]: E0123 18:52:22.251356 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8drl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tn6z6_calico-system(39b71af8-c04c-43e8-b2a3-f5d78af0b0fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:22.253000 kubelet[2719]: E0123 18:52:22.252976 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:52:23.123090 containerd[1560]: 
time="2026-01-23T18:52:23.122197632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:52:23.315834 containerd[1560]: time="2026-01-23T18:52:23.315739295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:23.317097 containerd[1560]: time="2026-01-23T18:52:23.317002290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:52:23.317097 containerd[1560]: time="2026-01-23T18:52:23.317038249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:52:23.317423 kubelet[2719]: E0123 18:52:23.317366 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:52:23.317860 kubelet[2719]: E0123 18:52:23.317434 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:52:23.317860 kubelet[2719]: E0123 18:52:23.317608 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:23.320306 containerd[1560]: time="2026-01-23T18:52:23.320083467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:52:23.471705 containerd[1560]: time="2026-01-23T18:52:23.470997395Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:23.471915 containerd[1560]: time="2026-01-23T18:52:23.471723166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:52:23.471915 containerd[1560]: time="2026-01-23T18:52:23.471816363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:52:23.472164 kubelet[2719]: E0123 18:52:23.472043 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:52:23.472164 kubelet[2719]: E0123 18:52:23.472113 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:52:23.472707 kubelet[2719]: E0123 18:52:23.472669 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lpwnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2b2hd_calico-system(9e8a8862-2354-40b6-9db2-d22bd07a4dc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:23.474074 kubelet[2719]: E0123 18:52:23.474034 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:52:25.122500 containerd[1560]: time="2026-01-23T18:52:25.122119429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:52:25.250942 containerd[1560]: time="2026-01-23T18:52:25.250885436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:25.252250 containerd[1560]: time="2026-01-23T18:52:25.252175972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:52:25.252327 containerd[1560]: time="2026-01-23T18:52:25.252276219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:52:25.252558 kubelet[2719]: E0123 18:52:25.252489 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:52:25.252558 kubelet[2719]: E0123 18:52:25.252549 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:52:25.253209 kubelet[2719]: E0123 18:52:25.252704 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c2dcaa451a7d498d98335548b24bb005,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:25.254891 containerd[1560]: time="2026-01-23T18:52:25.254636208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:52:25.448415 containerd[1560]: time="2026-01-23T18:52:25.448287688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:25.450786 containerd[1560]: time="2026-01-23T18:52:25.449212535Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:52:25.450786 containerd[1560]: time="2026-01-23T18:52:25.449311972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:52:25.451010 kubelet[2719]: E0123 18:52:25.450974 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:52:25.451151 kubelet[2719]: E0123 18:52:25.451109 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:52:25.451669 kubelet[2719]: E0123 18:52:25.451622 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4cbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-679f64db96-vrbwz_calico-system(b303f953-7602-41b2-af3d-dcb8c6c81cfb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:25.452966 kubelet[2719]: E0123 18:52:25.452922 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:52:26.120622 containerd[1560]: time="2026-01-23T18:52:26.120550822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:52:26.265931 systemd[1]: Started sshd@9-172.239.197.220:22-68.220.241.50:51304.service - OpenSSH per-connection server daemon (68.220.241.50:51304). Jan 23 18:52:26.272296 containerd[1560]: time="2026-01-23T18:52:26.272248041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:26.273729 containerd[1560]: time="2026-01-23T18:52:26.273682935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:52:26.273805 containerd[1560]: time="2026-01-23T18:52:26.273790122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:52:26.274006 kubelet[2719]: E0123 18:52:26.273967 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:26.274312 kubelet[2719]: E0123 18:52:26.274035 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:26.274613 kubelet[2719]: E0123 18:52:26.274566 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-tm7sj_calico-apiserver(a2a6e99e-bac7-4999-a563-b6f5faa05139): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:26.277202 kubelet[2719]: E0123 18:52:26.276427 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:52:26.434038 sshd[4909]: Accepted publickey for core from 68.220.241.50 port 51304 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:26.435555 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:26.441845 systemd-logind[1531]: New session 10 of user core. Jan 23 18:52:26.448948 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:52:26.634720 sshd[4912]: Connection closed by 68.220.241.50 port 51304 Jan 23 18:52:26.636757 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:26.643315 systemd[1]: sshd@9-172.239.197.220:22-68.220.241.50:51304.service: Deactivated successfully. 
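By this point every Calico companion image has failed the same way: goldmane, csi, node-driver-registrar, whisker, whisker-backend, and now apiserver all return 404 from ghcr.io under the flatcar/calico namespace at tag v3.30.4. A quick way to triage a dump like this is to pull the distinct failing references out of the noise; the following is a minimal sketch, where the regex and the suggested journalctl pipeline are illustrative, not part of this log:

import re, sys

# Collect the distinct image references that failed to resolve in a saved
# journal dump, e.g. `journalctl -u kubelet -u containerd > node.log`.
# The log escapes quotes around references, so allow optional backslashes.
pat = re.compile(r'failed to resolve reference \\*"([^"\\]+)\\*"')
refs = set()
for line in sys.stdin:
    refs.update(pat.findall(line))
for ref in sorted(refs):
    print(ref)

Run against this section it would print the six ghcr.io/flatcar/calico/*:v3.30.4 references seen above.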
Jan 23 18:52:26.645952 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:52:26.647657 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:52:26.649264 systemd-logind[1531]: Removed session 10. Jan 23 18:52:26.671353 systemd[1]: Started sshd@10-172.239.197.220:22-68.220.241.50:51314.service - OpenSSH per-connection server daemon (68.220.241.50:51314). Jan 23 18:52:26.870054 sshd[4924]: Accepted publickey for core from 68.220.241.50 port 51314 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:26.871984 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:26.876859 systemd-logind[1531]: New session 11 of user core. Jan 23 18:52:26.881923 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:52:27.151790 sshd[4930]: Connection closed by 68.220.241.50 port 51314 Jan 23 18:52:27.154955 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:27.159854 systemd[1]: sshd@10-172.239.197.220:22-68.220.241.50:51314.service: Deactivated successfully. Jan 23 18:52:27.165664 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:52:27.170310 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:52:27.172604 systemd-logind[1531]: Removed session 11. Jan 23 18:52:27.191245 systemd[1]: Started sshd@11-172.239.197.220:22-68.220.241.50:51326.service - OpenSSH per-connection server daemon (68.220.241.50:51326). Jan 23 18:52:27.381827 sshd[4941]: Accepted publickey for core from 68.220.241.50 port 51326 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:27.383083 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:27.390686 systemd-logind[1531]: New session 12 of user core. Jan 23 18:52:27.400716 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:52:27.595059 sshd[4944]: Connection closed by 68.220.241.50 port 51326 Jan 23 18:52:27.596884 sshd-session[4941]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:27.602715 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:52:27.607213 systemd[1]: sshd@11-172.239.197.220:22-68.220.241.50:51326.service: Deactivated successfully. Jan 23 18:52:27.610934 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:52:27.615518 systemd-logind[1531]: Removed session 12. 
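Each failure names the same three-part reference. containerd splits a reference like ghcr.io/flatcar/calico/csi:v3.30.4 into registry host, repository path, and tag, then asks that host for the repository's manifest; "failed to resolve reference ... not found" means the registry answered 404 for the manifest, not that the network was unreachable. A simplified decomposition (deliberately ignoring digests and docker.io's default-host special cases):

def split_ref(ref: str):
    # "ghcr.io/flatcar/calico/csi:v3.30.4" -> (host, repository, tag).
    host, _, rest = ref.partition("/")
    repo, _, tag = rest.rpartition(":")
    return host, repo, tag

print(split_ref("ghcr.io/flatcar/calico/csi:v3.30.4"))
# ('ghcr.io', 'flatcar/calico/csi', 'v3.30.4')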
Jan 23 18:52:28.120765 kubelet[2719]: E0123 18:52:28.120714 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:52:32.120858 containerd[1560]: time="2026-01-23T18:52:32.120634041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:52:32.282510 containerd[1560]: time="2026-01-23T18:52:32.282220796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:32.284584 containerd[1560]: time="2026-01-23T18:52:32.284489983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:52:32.284799 containerd[1560]: time="2026-01-23T18:52:32.284552242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:52:32.285137 kubelet[2719]: E0123 18:52:32.285061 2719 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:32.285137 kubelet[2719]: E0123 18:52:32.285117 2719 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:32.286010 kubelet[2719]: E0123 18:52:32.285691 2719 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtp9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5945bdff85-k9mzc_calico-apiserver(d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:32.287114 kubelet[2719]: E0123 18:52:32.287053 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:52:32.632084 systemd[1]: Started sshd@12-172.239.197.220:22-68.220.241.50:41742.service - OpenSSH per-connection server daemon (68.220.241.50:41742). Jan 23 18:52:32.804740 sshd[4956]: Accepted publickey for core from 68.220.241.50 port 41742 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:32.806513 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:32.813112 systemd-logind[1531]: New session 13 of user core. Jan 23 18:52:32.818219 systemd[1]: Started session-13.scope - Session 13 of User core. 
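The same 404 can be reproduced outside the kubelet by speaking the OCI distribution protocol to ghcr.io directly: fetch an anonymous pull token, then request the tag's manifest. The token endpoint, query parameters, and Accept header below follow GHCR's public-pull flow as understood here; they are assumptions to verify, and a missing or private repository can fail at either step:

import json
import urllib.error
import urllib.request

repo, tag = "flatcar/calico/apiserver", "v3.30.4"

# GHCR issues anonymous bearer tokens for public pulls (assumed endpoint).
tok_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
token = json.load(urllib.request.urlopen(tok_url))["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
)
try:
    print(urllib.request.urlopen(req).status)  # 200: the manifest exists
except urllib.error.HTTPError as e:
    print(e.code)  # 404 would match the containerd errors above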
Jan 23 18:52:33.035848 sshd[4959]: Connection closed by 68.220.241.50 port 41742 Jan 23 18:52:33.036914 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:33.045036 systemd[1]: sshd@12-172.239.197.220:22-68.220.241.50:41742.service: Deactivated successfully. Jan 23 18:52:33.048229 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:52:33.054145 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:52:33.055280 systemd-logind[1531]: Removed session 13. Jan 23 18:52:33.073972 systemd[1]: Started sshd@13-172.239.197.220:22-68.220.241.50:41752.service - OpenSSH per-connection server daemon (68.220.241.50:41752). Jan 23 18:52:33.269403 sshd[4971]: Accepted publickey for core from 68.220.241.50 port 41752 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:33.271051 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:33.278892 systemd-logind[1531]: New session 14 of user core. Jan 23 18:52:33.284975 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:52:33.624233 sshd[4974]: Connection closed by 68.220.241.50 port 41752 Jan 23 18:52:33.623293 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:33.630080 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:52:33.630704 systemd[1]: sshd@13-172.239.197.220:22-68.220.241.50:41752.service: Deactivated successfully. Jan 23 18:52:33.634373 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:52:33.639124 systemd-logind[1531]: Removed session 14. Jan 23 18:52:33.652846 systemd[1]: Started sshd@14-172.239.197.220:22-68.220.241.50:41762.service - OpenSSH per-connection server daemon (68.220.241.50:41762). Jan 23 18:52:33.829129 sshd[4984]: Accepted publickey for core from 68.220.241.50 port 41762 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:33.831935 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:33.837316 systemd-logind[1531]: New session 15 of user core. Jan 23 18:52:33.844335 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:52:34.583818 sshd[4987]: Connection closed by 68.220.241.50 port 41762 Jan 23 18:52:34.586628 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:34.593604 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:52:34.597288 systemd[1]: sshd@14-172.239.197.220:22-68.220.241.50:41762.service: Deactivated successfully. Jan 23 18:52:34.603508 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:52:34.626640 systemd-logind[1531]: Removed session 15. Jan 23 18:52:34.627818 systemd[1]: Started sshd@15-172.239.197.220:22-68.220.241.50:41778.service - OpenSSH per-connection server daemon (68.220.241.50:41778). Jan 23 18:52:34.829141 sshd[5004]: Accepted publickey for core from 68.220.241.50 port 41778 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:34.831284 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:34.841248 systemd-logind[1531]: New session 16 of user core. Jan 23 18:52:34.845999 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 18:52:35.128177 kubelet[2719]: E0123 18:52:35.128020 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:52:35.129129 kubelet[2719]: E0123 18:52:35.128932 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc" Jan 23 18:52:35.209798 sshd[5007]: Connection closed by 68.220.241.50 port 41778 Jan 23 18:52:35.211103 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:35.221026 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:52:35.223972 systemd[1]: sshd@15-172.239.197.220:22-68.220.241.50:41778.service: Deactivated successfully. Jan 23 18:52:35.229407 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:52:35.243958 systemd-logind[1531]: Removed session 16. Jan 23 18:52:35.247090 systemd[1]: Started sshd@16-172.239.197.220:22-68.220.241.50:41786.service - OpenSSH per-connection server daemon (68.220.241.50:41786). Jan 23 18:52:35.424412 sshd[5017]: Accepted publickey for core from 68.220.241.50 port 41786 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:35.427046 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:35.437298 systemd-logind[1531]: New session 17 of user core. Jan 23 18:52:35.441904 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:52:35.659381 sshd[5020]: Connection closed by 68.220.241.50 port 41786 Jan 23 18:52:35.660442 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:35.669031 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:52:35.671876 systemd[1]: sshd@16-172.239.197.220:22-68.220.241.50:41786.service: Deactivated successfully. Jan 23 18:52:35.675249 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:52:35.679330 systemd-logind[1531]: Removed session 17. 
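Interleaved with the pull failures, sshd and systemd-logind record a burst of short sessions (10 through 17) from 68.220.241.50, each an accept / pam-open / close / scope-deactivate sequence. Pairing the logind lines gives session lifetimes; a sketch that assumes one journal entry per line, as journalctl normally emits them (the year is not in the timestamps, so one is supplied for parsing):

import re, sys
from datetime import datetime

ts = r"(\w{3} \d+ [\d:.]+)"
new = re.compile(ts + r".*New session (\d+) of user")
gone = re.compile(ts + r".*Removed session (\d+)\.")

def when(s):
    return datetime.strptime("2026 " + s, "%Y %b %d %H:%M:%S.%f")

opened = {}
for line in sys.stdin:
    if m := new.search(line):
        opened[m[2]] = when(m[1])
    elif (m := gone.search(line)) and m[2] in opened:
        dur = (when(m[1]) - opened.pop(m[2])).total_seconds()
        print(f"session {m[2]}: {dur:.1f}s")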
Jan 23 18:52:39.124298 kubelet[2719]: E0123 18:52:39.124233 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:52:39.126170 kubelet[2719]: E0123 18:52:39.126087 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-679f64db96-vrbwz" podUID="b303f953-7602-41b2-af3d-dcb8c6c81cfb" Jan 23 18:52:40.121950 kubelet[2719]: E0123 18:52:40.121634 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-tm7sj" podUID="a2a6e99e-bac7-4999-a563-b6f5faa05139" Jan 23 18:52:40.695097 systemd[1]: Started sshd@17-172.239.197.220:22-68.220.241.50:41800.service - OpenSSH per-connection server daemon (68.220.241.50:41800). Jan 23 18:52:40.876511 sshd[5033]: Accepted publickey for core from 68.220.241.50 port 41800 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:40.879480 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:40.887599 systemd-logind[1531]: New session 18 of user core. Jan 23 18:52:40.897959 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:52:41.111540 sshd[5036]: Connection closed by 68.220.241.50 port 41800 Jan 23 18:52:41.114006 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:41.121349 systemd[1]: sshd@17-172.239.197.220:22-68.220.241.50:41800.service: Deactivated successfully. 
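The dns.go warning above reflects a glibc limit: the resolver honors at most three nameserver entries (MAXNS = 3). The host resolv.conf evidently lists more, so when composing a pod's resolv.conf the kubelet keeps the first three (172.232.0.17, 172.232.0.16, 172.232.0.21) and logs the rest as omitted. The truncation amounts to:

# Mimics the nameserver truncation behind the "Nameserver limits exceeded"
# warning; the input list is illustrative.
MAXNS = 3

def effective_nameservers(servers):
    kept = servers[:MAXNS]
    if len(servers) > MAXNS:
        print(f"omitting {len(servers) - MAXNS} nameserver(s); keeping {kept}")
    return kept

effective_nameservers(["172.232.0.17", "172.232.0.16", "172.232.0.21", "10.0.0.53"])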
Jan 23 18:52:41.124115 kubelet[2719]: E0123 18:52:41.124085 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-677f685ddf-wzbds" podUID="12d6ca3a-1fd4-422a-92d6-048bdc9d3706" Jan 23 18:52:41.130331 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:52:41.135138 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:52:41.137208 systemd-logind[1531]: Removed session 18. Jan 23 18:52:43.122601 kubelet[2719]: E0123 18:52:43.122286 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5945bdff85-k9mzc" podUID="d93cd3f7-0e65-42d4-b5ec-feb6f561b2d9" Jan 23 18:52:46.122542 kubelet[2719]: E0123 18:52:46.122089 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2b2hd" podUID="9e8a8862-2354-40b6-9db2-d22bd07a4dc3" Jan 23 18:52:46.146027 systemd[1]: Started sshd@18-172.239.197.220:22-68.220.241.50:54880.service - OpenSSH per-connection server daemon (68.220.241.50:54880). Jan 23 18:52:46.319564 sshd[5049]: Accepted publickey for core from 68.220.241.50 port 54880 ssh2: RSA SHA256:abrAq+mLx3KJWsoVA8ogTpEDR6WfvmgkFb4xEptZjdk Jan 23 18:52:46.322303 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:46.328102 systemd-logind[1531]: New session 19 of user core. Jan 23 18:52:46.336945 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 18:52:46.528116 sshd[5052]: Connection closed by 68.220.241.50 port 54880 Jan 23 18:52:46.531974 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:46.539907 systemd[1]: sshd@18-172.239.197.220:22-68.220.241.50:54880.service: Deactivated successfully. 
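Note the shift since roughly 18:52:28 from ErrImagePull to ImagePullBackOff: the kubelet now declines to re-pull until a backoff window expires, which is why the same errors recur at widening intervals rather than on every pod sync. With the kubelet's default image-pull backoff (assumed here: 10s initial delay, doubling, capped at 300s) the retry schedule looks like:

# Prints the assumed image-pull backoff schedule that produces the spacing
# of the ImagePullBackOff entries above.
def backoff_schedule(initial=10, cap=300, tries=7):
    delay, total = initial, 0
    for i in range(tries):
        total += delay
        print(f"retry {i + 1}: wait {delay:>3}s (t+{total}s)")
        delay = min(delay * 2, cap)

backoff_schedule()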
Jan 23 18:52:46.546100 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:52:46.551614 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:52:46.554410 systemd-logind[1531]: Removed session 19. Jan 23 18:52:48.120821 kubelet[2719]: E0123 18:52:48.119855 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Jan 23 18:52:48.123137 kubelet[2719]: E0123 18:52:48.123080 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tn6z6" podUID="39b71af8-c04c-43e8-b2a3-f5d78af0b0fc"
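The section ends with the node still in this state: every Calico add-on pod is wedged on images that do not exist at ghcr.io/flatcar/calico/*:v3.30.4. One remediation path is re-pointing the references at a registry that actually publishes those tags; whether, for example, quay.io/calico carries v3.30.4 is an assumption to verify first, and the sketch below only shows the mechanical rewrite of the failing references:

# Illustrative ref rewrite; quay.io/calico as the target is an assumption.
FAILING = [
    "ghcr.io/flatcar/calico/goldmane:v3.30.4",
    "ghcr.io/flatcar/calico/apiserver:v3.30.4",
]
for ref in FAILING:
    print(ref, "->", ref.replace("ghcr.io/flatcar/calico/", "quay.io/calico/"))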