Jan 17 00:21:38.009931 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:21:38.009972 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:21:38.009992 kernel: BIOS-provided physical RAM map:
Jan 17 00:21:38.010004 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:21:38.010016 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 17 00:21:38.010028 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 17 00:21:38.010042 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 17 00:21:38.010056 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 17 00:21:38.010069 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 17 00:21:38.010084 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 17 00:21:38.010096 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 17 00:21:38.010109 kernel: NX (Execute Disable) protection: active
Jan 17 00:21:38.010121 kernel: APIC: Static calls initialized
Jan 17 00:21:38.010134 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:21:38.010168 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 17 00:21:38.010185 kernel: SMBIOS 2.7 present.
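For reference, the "usable" ranges in the BIOS-e820 map above add up to the RAM the kernel can actually manage. A minimal Python sketch (assuming the log text is available as a string, e.g. captured from dmesg):

    import re

    # Sum the "usable" ranges from BIOS-e820 lines; the ranges are inclusive.
    E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] usable")

    def usable_bytes(log_text: str) -> int:
        return sum(int(end, 16) - int(start, 16) + 1
                   for start, end in E820.findall(log_text))

    # For the three usable ranges above this comes to roughly 1.94 GiB,
    # consistent with a 2 GiB t3.small after firmware/ACPI reservations.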
Jan 17 00:21:38.010198 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 17 00:21:38.010212 kernel: Hypervisor detected: KVM
Jan 17 00:21:38.010226 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:21:38.010239 kernel: kvm-clock: using sched offset of 3900402824 cycles
Jan 17 00:21:38.010254 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:21:38.010267 kernel: tsc: Detected 2499.996 MHz processor
Jan 17 00:21:38.010282 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:21:38.010296 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:21:38.010310 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 17 00:21:38.010328 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:21:38.010342 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:21:38.010356 kernel: Using GB pages for direct mapping
Jan 17 00:21:38.010369 kernel: Secure boot disabled
Jan 17 00:21:38.010383 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:21:38.010397 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 17 00:21:38.010411 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:21:38.010425 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:21:38.010439 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:21:38.010457 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 17 00:21:38.010470 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 17 00:21:38.010484 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:21:38.010498 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:21:38.010512 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 17 00:21:38.010527 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 17 00:21:38.010548 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:21:38.010566 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:21:38.010581 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 17 00:21:38.010597 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 17 00:21:38.010611 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 17 00:21:38.010626 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 17 00:21:38.010642 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 17 00:21:38.010657 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 17 00:21:38.010677 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 17 00:21:38.010692 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 17 00:21:38.010706 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 17 00:21:38.010721 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 17 00:21:38.010736 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 17 00:21:38.010750 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 17 00:21:38.010765 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:21:38.010780 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:21:38.010796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 17 00:21:38.010814 kernel: NUMA: Initialized distance table, cnt=1
Jan 17 00:21:38.010829 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 17 00:21:38.010844 kernel: Zone ranges:
Jan 17 00:21:38.010859 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:21:38.010874 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 17 00:21:38.010889 kernel: Normal empty
Jan 17 00:21:38.010904 kernel: Movable zone start for each node
Jan 17 00:21:38.010918 kernel: Early memory node ranges
Jan 17 00:21:38.010933 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:21:38.010952 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 17 00:21:38.010967 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 17 00:21:38.010982 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 17 00:21:38.010997 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:21:38.011012 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:21:38.011027 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:21:38.011043 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 17 00:21:38.011058 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:21:38.011074 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:21:38.011093 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 17 00:21:38.011108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:21:38.011122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:21:38.011137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:21:38.014056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:21:38.014074 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:21:38.014089 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:21:38.014103 kernel: TSC deadline timer available
Jan 17 00:21:38.014118 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:21:38.014153 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:21:38.014165 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 17 00:21:38.014177 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:21:38.014189 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:21:38.014201 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:21:38.014213 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:21:38.014225 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:21:38.014237 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:21:38.014248 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:21:38.014264 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:21:38.014279 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:21:38.014306 kernel: random: crng init done
Jan 17 00:21:38.014321 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:21:38.014336 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:21:38.014351 kernel: Fallback order for Node 0: 0
Jan 17 00:21:38.014365 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 17 00:21:38.014379 kernel: Policy zone: DMA32
Jan 17 00:21:38.014398 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:21:38.014414 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved)
Jan 17 00:21:38.014429 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:21:38.014444 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:21:38.014459 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:21:38.014474 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:21:38.014487 kernel: Dynamic Preempt: voluntary
Jan 17 00:21:38.014502 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:21:38.014518 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:21:38.014536 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:21:38.014552 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:21:38.014567 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:21:38.014581 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:21:38.014594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:21:38.014609 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:21:38.014624 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:21:38.014639 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:21:38.014669 kernel: Console: colour dummy device 80x25
Jan 17 00:21:38.014685 kernel: printk: console [tty0] enabled
Jan 17 00:21:38.014700 kernel: printk: console [ttyS0] enabled
Jan 17 00:21:38.014715 kernel: ACPI: Core revision 20230628
Jan 17 00:21:38.014734 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 17 00:21:38.014750 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:21:38.014766 kernel: x2apic enabled
Jan 17 00:21:38.014782 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:21:38.014797 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:21:38.014815 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 17 00:21:38.014830 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:21:38.014847 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:21:38.014862 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:21:38.014876 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:21:38.014891 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:21:38.014907 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:21:38.014923 kernel: RETBleed: Vulnerable
Jan 17 00:21:38.014938 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:21:38.014954 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:21:38.014973 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:21:38.014989 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 00:21:38.015004 kernel: active return thunk: its_return_thunk
Jan 17 00:21:38.015020 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:21:38.015036 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:21:38.015052 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:21:38.015067 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:21:38.015083 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 00:21:38.015099 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 00:21:38.015115 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:21:38.015131 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:21:38.015162 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:21:38.015176 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:21:38.015189 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:21:38.015203 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 17 00:21:38.015215 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 17 00:21:38.015228 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 17 00:21:38.015243 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 17 00:21:38.015258 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 17 00:21:38.015272 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 17 00:21:38.015287 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 17 00:21:38.015302 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:21:38.015320 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:21:38.015335 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:21:38.015349 kernel: landlock: Up and running.
Jan 17 00:21:38.015362 kernel: SELinux: Initializing.
Jan 17 00:21:38.015375 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:21:38.015389 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:21:38.015404 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Jan 17 00:21:38.015418 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:21:38.015432 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:21:38.015447 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:21:38.015461 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:21:38.015479 kernel: signal: max sigframe size: 3632
Jan 17 00:21:38.015493 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:21:38.015508 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:21:38.015523 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:21:38.015538 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:21:38.015552 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:21:38.015567 kernel: .... node #0, CPUs: #1
Jan 17 00:21:38.015582 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:21:38.015598 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:21:38.015618 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:21:38.015634 kernel: smpboot: Max logical packages: 1
Jan 17 00:21:38.015651 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 17 00:21:38.015667 kernel: devtmpfs: initialized
Jan 17 00:21:38.015683 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:21:38.015699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 17 00:21:38.015716 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:21:38.015732 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:21:38.015752 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:21:38.015769 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:21:38.015785 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:21:38.015802 kernel: audit: type=2000 audit(1768609297.085:1): state=initialized audit_enabled=0 res=1
Jan 17 00:21:38.015818 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:21:38.015834 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:21:38.015850 kernel: cpuidle: using governor menu
Jan 17 00:21:38.015866 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:21:38.015883 kernel: dca service started, version 1.12.1
Jan 17 00:21:38.015903 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:21:38.015920 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
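The mitigation states logged above (RETBleed, MDS, MMIO Stale Data, and the SMT warnings) are also exported at runtime under sysfs; a small Python sketch that reads the same information on a live system:

    from pathlib import Path

    # Each file holds a one-line status such as "Vulnerable" or "Mitigation: ...".
    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")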
Jan 17 00:21:38.015937 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:21:38.015954 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:21:38.015970 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:21:38.015985 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:21:38.015998 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:21:38.016013 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:21:38.016029 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:21:38.016047 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:21:38.016062 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:21:38.016076 kernel: ACPI: Interpreter enabled
Jan 17 00:21:38.016089 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:21:38.016104 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:21:38.016118 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:21:38.016132 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:21:38.019523 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:21:38.019551 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:21:38.019817 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:21:38.019962 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:21:38.020098 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:21:38.020119 kernel: acpiphp: Slot [3] registered
Jan 17 00:21:38.020135 kernel: acpiphp: Slot [4] registered
Jan 17 00:21:38.020168 kernel: acpiphp: Slot [5] registered
Jan 17 00:21:38.020182 kernel: acpiphp: Slot [6] registered
Jan 17 00:21:38.020194 kernel: acpiphp: Slot [7] registered
Jan 17 00:21:38.020213 kernel: acpiphp: Slot [8] registered
Jan 17 00:21:38.020226 kernel: acpiphp: Slot [9] registered
Jan 17 00:21:38.020240 kernel: acpiphp: Slot [10] registered
Jan 17 00:21:38.020256 kernel: acpiphp: Slot [11] registered
Jan 17 00:21:38.020271 kernel: acpiphp: Slot [12] registered
Jan 17 00:21:38.020284 kernel: acpiphp: Slot [13] registered
Jan 17 00:21:38.020297 kernel: acpiphp: Slot [14] registered
Jan 17 00:21:38.020312 kernel: acpiphp: Slot [15] registered
Jan 17 00:21:38.020328 kernel: acpiphp: Slot [16] registered
Jan 17 00:21:38.020348 kernel: acpiphp: Slot [17] registered
Jan 17 00:21:38.020364 kernel: acpiphp: Slot [18] registered
Jan 17 00:21:38.020380 kernel: acpiphp: Slot [19] registered
Jan 17 00:21:38.020396 kernel: acpiphp: Slot [20] registered
Jan 17 00:21:38.020411 kernel: acpiphp: Slot [21] registered
Jan 17 00:21:38.020425 kernel: acpiphp: Slot [22] registered
Jan 17 00:21:38.020441 kernel: acpiphp: Slot [23] registered
Jan 17 00:21:38.020455 kernel: acpiphp: Slot [24] registered
Jan 17 00:21:38.020468 kernel: acpiphp: Slot [25] registered
Jan 17 00:21:38.020482 kernel: acpiphp: Slot [26] registered
Jan 17 00:21:38.020499 kernel: acpiphp: Slot [27] registered
Jan 17 00:21:38.020513 kernel: acpiphp: Slot [28] registered
Jan 17 00:21:38.020527 kernel: acpiphp: Slot [29] registered
Jan 17 00:21:38.020541 kernel: acpiphp: Slot [30] registered
Jan 17 00:21:38.020555 kernel: acpiphp: Slot [31] registered
Jan 17 00:21:38.020569 kernel: PCI host bridge to bus 0000:00
Jan 17 00:21:38.020747 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:21:38.020864 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:21:38.020982 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:21:38.021094 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:21:38.022203 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:21:38.022351 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:21:38.022516 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:21:38.022666 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:21:38.022827 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 17 00:21:38.022964 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:21:38.023098 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 17 00:21:38.024332 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 17 00:21:38.024490 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 17 00:21:38.024625 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 17 00:21:38.024772 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 17 00:21:38.024915 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 17 00:21:38.025061 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 17 00:21:38.025265 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 17 00:21:38.025402 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:21:38.025537 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 17 00:21:38.025670 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:21:38.025813 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:21:38.025953 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 17 00:21:38.026096 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:21:38.026247 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 17 00:21:38.026269 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:21:38.026286 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:21:38.026302 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:21:38.026319 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:21:38.026335 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:21:38.026356 kernel: iommu: Default domain type: Translated
Jan 17 00:21:38.026372 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:21:38.026389 kernel: efivars: Registered efivars operations
Jan 17 00:21:38.026405 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:21:38.026422 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:21:38.026439 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 17 00:21:38.026454 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 17 00:21:38.026589 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 17 00:21:38.026768 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 17 00:21:38.026928 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:21:38.026948 kernel: vgaarb: loaded
Jan 17 00:21:38.026964 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 17 00:21:38.026979 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 17 00:21:38.026994 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:21:38.027009 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:21:38.027024 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:21:38.027039 kernel: pnp: PnP ACPI init
Jan 17 00:21:38.027059 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:21:38.027075 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:21:38.027091 kernel: NET: Registered PF_INET protocol family
Jan 17 00:21:38.027106 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:21:38.027121 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:21:38.027137 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:21:38.027178 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:21:38.027194 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:21:38.027209 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:21:38.027228 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:21:38.027244 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:21:38.027259 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:21:38.027274 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:21:38.027403 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:21:38.027521 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:21:38.027638 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:21:38.027754 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:21:38.027873 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:21:38.028009 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:21:38.028028 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:21:38.028043 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:21:38.028058 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:21:38.028074 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:21:38.028089 kernel: Initialise system trusted keyrings
Jan 17 00:21:38.028104 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:21:38.028119 kernel: Key type asymmetric registered
Jan 17 00:21:38.028137 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:21:38.028240 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:21:38.028255 kernel: io scheduler mq-deadline registered
Jan 17 00:21:38.028270 kernel: io scheduler kyber registered
Jan 17 00:21:38.028285 kernel: io scheduler bfq registered
Jan 17 00:21:38.028300 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:21:38.028315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:21:38.028330 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:21:38.028345 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:21:38.028364 kernel: i8042: Warning: Keylock active
Jan 17 00:21:38.028379 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:21:38.028394 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:21:38.028536 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:21:38.028659 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:21:38.028790 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:21:37 UTC (1768609297)
Jan 17 00:21:38.028910 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:21:38.028932 kernel: intel_pstate: CPU model not supported
Jan 17 00:21:38.028948 kernel: efifb: probing for efifb
Jan 17 00:21:38.028963 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 17 00:21:38.028978 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 17 00:21:38.028993 kernel: efifb: scrolling: redraw
Jan 17 00:21:38.029008 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:21:38.029023 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:21:38.029038 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:21:38.029053 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:21:38.029068 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:21:38.029087 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:21:38.029102 kernel: Segment Routing with IPv6
Jan 17 00:21:38.029117 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:21:38.029132 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:21:38.029169 kernel: Key type dns_resolver registered
Jan 17 00:21:38.029185 kernel: IPI shorthand broadcast: enabled
Jan 17 00:21:38.029225 kernel: sched_clock: Marking stable (577002083, 202079095)->(896391279, -117310101)
Jan 17 00:21:38.029244 kernel: registered taskstats version 1
Jan 17 00:21:38.029260 kernel: Loading compiled-in X.509 certificates
Jan 17 00:21:38.029278 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:21:38.029294 kernel: Key type .fscrypt registered
Jan 17 00:21:38.029310 kernel: Key type fscrypt-provisioning registered
Jan 17 00:21:38.029325 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:21:38.029341 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:21:38.029357 kernel: ima: No architecture policies found
Jan 17 00:21:38.029373 kernel: clk: Disabling unused clocks
Jan 17 00:21:38.029388 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:21:38.029404 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:21:38.029424 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:21:38.029441 kernel: Run /init as init process
Jan 17 00:21:38.029456 kernel: with arguments:
Jan 17 00:21:38.029472 kernel: /init
Jan 17 00:21:38.029487 kernel: with environment:
Jan 17 00:21:38.029503 kernel: HOME=/
Jan 17 00:21:38.029519 kernel: TERM=linux
Jan 17 00:21:38.029537 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:21:38.029559 systemd[1]: Detected virtualization amazon.
Jan 17 00:21:38.029576 systemd[1]: Detected architecture x86-64.
Jan 17 00:21:38.029592 systemd[1]: Running in initrd.
Jan 17 00:21:38.029609 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:21:38.029624 systemd[1]: Hostname set to .
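The rtc_cmos line above pairs the wall-clock time with its Unix epoch value in parentheses; the correspondence is easy to check:

    from datetime import datetime, timezone

    # From "setting system clock to 2026-01-17T00:21:37 UTC (1768609297)".
    epoch = 1768609297
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # -> 2026-01-17T00:21:37+00:00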
Jan 17 00:21:38.029641 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:21:38.029658 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:21:38.029674 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:21:38.029694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:21:38.029712 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:21:38.029729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:21:38.029746 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:21:38.029766 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:21:38.029788 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:21:38.029805 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:21:38.029823 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:21:38.029840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:21:38.029857 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:21:38.029873 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:21:38.029890 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:21:38.029910 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:21:38.029927 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:21:38.029944 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:21:38.029960 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:21:38.029977 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:21:38.029994 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:21:38.030011 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:21:38.030027 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:21:38.030044 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:21:38.030064 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:21:38.030081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:21:38.030098 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:21:38.030115 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:21:38.030131 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:21:38.030210 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:21:38.030227 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:38.030244 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:21:38.030294 systemd-journald[179]: Collecting audit messages is disabled.
Jan 17 00:21:38.030330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:21:38.030347 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:21:38.030364 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:21:38.030386 systemd-journald[179]: Journal started
Jan 17 00:21:38.030420 systemd-journald[179]: Runtime Journal (/run/log/journal/ec22dfb5aa6e551a7fa6051f016a67ea) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:21:37.999413 systemd-modules-load[180]: Inserted module 'overlay'
Jan 17 00:21:38.047510 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:21:38.051175 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:21:38.052395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:38.062410 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:21:38.071801 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:21:38.071838 kernel: Bridge firewalling registered
Jan 17 00:21:38.070902 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 17 00:21:38.078402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:21:38.081046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:21:38.082133 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:21:38.088338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:21:38.095436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:21:38.099359 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:21:38.110836 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:21:38.119419 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:21:38.128926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:21:38.131574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:21:38.142371 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:21:38.146249 dracut-cmdline[211]: dracut-dracut-053
Jan 17 00:21:38.148597 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:21:38.190816 systemd-resolved[217]: Positive Trust Anchors:
Jan 17 00:21:38.191798 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:21:38.191861 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:21:38.200722 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 17 00:21:38.203735 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:21:38.205261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:21:38.242185 kernel: SCSI subsystem initialized
Jan 17 00:21:38.252170 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:21:38.265177 kernel: iscsi: registered transport (tcp)
Jan 17 00:21:38.287335 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:21:38.287422 kernel: QLogic iSCSI HBA Driver
Jan 17 00:21:38.330266 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:21:38.335401 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:21:38.371598 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:21:38.371680 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:21:38.371701 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:21:38.417194 kernel: raid6: avx512x4 gen() 17371 MB/s
Jan 17 00:21:38.435178 kernel: raid6: avx512x2 gen() 17466 MB/s
Jan 17 00:21:38.453184 kernel: raid6: avx512x1 gen() 17520 MB/s
Jan 17 00:21:38.471176 kernel: raid6: avx2x4 gen() 17712 MB/s
Jan 17 00:21:38.489187 kernel: raid6: avx2x2 gen() 17482 MB/s
Jan 17 00:21:38.508500 kernel: raid6: avx2x1 gen() 9955 MB/s
Jan 17 00:21:38.508588 kernel: raid6: using algorithm avx2x4 gen() 17712 MB/s
Jan 17 00:21:38.534606 kernel: raid6: .... xor() 6684 MB/s, rmw enabled
Jan 17 00:21:38.534782 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:21:38.557172 kernel: xor: automatically using best checksumming function avx
Jan 17 00:21:38.783177 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:21:38.794513 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:21:38.799414 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:21:38.827736 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 17 00:21:38.833366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:21:38.840360 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:21:38.864060 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Jan 17 00:21:38.905923 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:21:38.910487 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:21:38.963969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:21:38.973553 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
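The positive trust anchor logged by systemd-resolved above is the IANA root-zone DS record. Its fields split cleanly; a sketch (key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256):

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
    assert (key_tag, alg, digest_type) == ("20326", "8", "2")
    assert len(digest) == 64  # hex-encoded 32-byte SHA-256 digest of the root KSK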
Jan 17 00:21:38.999588 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:21:39.002643 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:21:39.004367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:21:39.005498 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:21:39.013399 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:21:39.042875 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:21:39.068450 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:21:39.068741 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:21:39.073173 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 00:21:39.073444 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:21:39.099815 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:21:39.100822 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:21:39.105606 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:21:39.106737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:21:39.117401 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:53:90:bf:b5:6f
Jan 17 00:21:39.115950 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:21:39.117055 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:21:39.120218 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:21:39.120439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:39.123540 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:39.134528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:39.153840 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:21:39.154132 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:21:39.155942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:21:39.159086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:39.174694 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:21:39.179262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:39.188980 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:21:39.189052 kernel: GPT:9289727 != 33554431
Jan 17 00:21:39.189073 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:21:39.189099 kernel: GPT:9289727 != 33554431
Jan 17 00:21:39.189117 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:21:39.189135 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:21:39.206987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:39.215398 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:21:39.233401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
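The GPT complaint above ("9289727 != 33554431") means the backup GPT header still sits at the end of the original disk image rather than at the end of the larger EBS volume; disk-uuid.service rewrites it a few lines further down. The arithmetic, assuming 512-byte sectors:

    SECTOR = 512
    alt_header_lba = 9289727    # backup header location recorded in the image
    last_lba = 33554431         # actual last LBA of the attached volume

    print((alt_header_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB original image
    print((last_lba + 1) * SECTOR / 2**30)        # 16.0 GiB EBS volume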
Jan 17 00:21:39.270174 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (457)
Jan 17 00:21:39.302232 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (443)
Jan 17 00:21:39.328096 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:21:39.369655 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:21:39.370377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:21:39.388338 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:21:39.395355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:21:39.402346 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:21:39.409753 disk-uuid[629]: Primary Header is updated.
Jan 17 00:21:39.409753 disk-uuid[629]: Secondary Entries is updated.
Jan 17 00:21:39.409753 disk-uuid[629]: Secondary Header is updated.
Jan 17 00:21:39.416183 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:21:39.424180 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:21:39.433190 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:21:40.433218 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:21:40.433953 disk-uuid[630]: The operation has completed successfully.
Jan 17 00:21:40.573476 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:21:40.573619 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:21:40.578382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:21:40.594659 sh[974]: Success
Jan 17 00:21:40.617356 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:21:40.739234 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:21:40.747461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:21:40.751671 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:21:40.808187 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:21:40.808292 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:40.813792 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:21:40.813871 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:21:40.818253 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:21:40.847193 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:21:40.863526 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:21:40.864971 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:21:40.871424 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:21:40.875194 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:21:40.921278 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:40.921362 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:40.921384 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:21:40.941213 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:21:40.959173 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:40.959055 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:21:40.967674 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:21:40.975482 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:21:41.024373 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:21:41.031654 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:21:41.054658 systemd-networkd[1166]: lo: Link UP
Jan 17 00:21:41.054676 systemd-networkd[1166]: lo: Gained carrier
Jan 17 00:21:41.058576 systemd-networkd[1166]: Enumeration completed
Jan 17 00:21:41.059061 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:21:41.059067 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:21:41.059312 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:21:41.064391 systemd[1]: Reached target network.target - Network.
Jan 17 00:21:41.065852 systemd-networkd[1166]: eth0: Link UP
Jan 17 00:21:41.065858 systemd-networkd[1166]: eth0: Gained carrier
Jan 17 00:21:41.065873 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:21:41.080250 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.29.247/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:21:41.274395 ignition[1101]: Ignition 2.19.0
Jan 17 00:21:41.274413 ignition[1101]: Stage: fetch-offline
Jan 17 00:21:41.274669 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:41.274682 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:21:41.275321 ignition[1101]: Ignition finished successfully
Jan 17 00:21:41.277021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:21:41.289491 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
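The DHCPv4 lease above can be sanity-checked with the Python standard library: the /20 prefix puts both the leased address and the gateway in the same subnet:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.29.247/20")
    print(iface.network)  # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True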
Jan 17 00:21:41.305888 ignition[1176]: Ignition 2.19.0
Jan 17 00:21:41.305908 ignition[1176]: Stage: fetch
Jan 17 00:21:41.306410 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:41.306423 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:21:41.306541 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:21:41.348022 ignition[1176]: PUT result: OK
Jan 17 00:21:41.354306 ignition[1176]: parsed url from cmdline: ""
Jan 17 00:21:41.354317 ignition[1176]: no config URL provided
Jan 17 00:21:41.354326 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:21:41.354344 ignition[1176]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:21:41.354366 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:21:41.358530 ignition[1176]: PUT result: OK
Jan 17 00:21:41.358609 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:21:41.359693 ignition[1176]: GET result: OK
Jan 17 00:21:41.359818 ignition[1176]: parsing config with SHA512: c0c02880f3be7b860a2b6f2330edc6a02086880a0aef75da75923c2205194e832e7c6a4905db3f097fc84dad9083853566988b32406577ae6815ff07a560afed
Jan 17 00:21:41.367727 unknown[1176]: fetched base config from "system"
Jan 17 00:21:41.367747 unknown[1176]: fetched base config from "system"
Jan 17 00:21:41.368433 ignition[1176]: fetch: fetch complete
Jan 17 00:21:41.367754 unknown[1176]: fetched user config from "aws"
Jan 17 00:21:41.368441 ignition[1176]: fetch: fetch passed
Jan 17 00:21:41.368517 ignition[1176]: Ignition finished successfully
Jan 17 00:21:41.370926 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:21:41.376441 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:21:41.394893 ignition[1183]: Ignition 2.19.0
Jan 17 00:21:41.394912 ignition[1183]: Stage: kargs
Jan 17 00:21:41.395410 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:41.395424 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:21:41.395567 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:21:41.396575 ignition[1183]: PUT result: OK
Jan 17 00:21:41.399371 ignition[1183]: kargs: kargs passed
Jan 17 00:21:41.399453 ignition[1183]: Ignition finished successfully
Jan 17 00:21:41.401775 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:21:41.407427 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:21:41.424720 ignition[1189]: Ignition 2.19.0
Jan 17 00:21:41.424735 ignition[1189]: Stage: disks
Jan 17 00:21:41.425279 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:41.425297 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:21:41.425423 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:21:41.426324 ignition[1189]: PUT result: OK
Jan 17 00:21:41.429816 ignition[1189]: disks: disks passed
Jan 17 00:21:41.429911 ignition[1189]: Ignition finished successfully
Jan 17 00:21:41.431783 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:21:41.432546 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:21:41.433020 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:21:41.433635 systemd[1]: Reached target local-fs.target - Local File Systems.
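Each Ignition stage above performs the same IMDSv2 exchange: a PUT to mint a session token, then token-authenticated GETs such as the user-data fetch. A minimal Python sketch of that flow (it only works from inside an EC2 instance):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # PUT: request a short-lived session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # GET: fetch user data with the token, as in the log's 2019-10-01 path.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    print(urllib.request.urlopen(req, timeout=2).read().decode())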
Jan 17 00:21:41.434250 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:21:41.434836 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:21:41.440370 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:21:41.477363 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:21:41.480291 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:21:41.487341 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:21:41.590177 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:21:41.590764 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:21:41.592005 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:21:41.606325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:21:41.609567 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:21:41.610736 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:21:41.610806 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:21:41.610840 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:21:41.625405 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:21:41.629286 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Jan 17 00:21:41.629321 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:41.629335 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:41.629347 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:21:41.642393 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:21:41.648163 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:21:41.650932 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:21:41.868355 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:21:41.875539 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:21:41.881627 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:21:41.886879 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:21:42.033808 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:21:42.038314 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:21:42.041002 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:21:42.055362 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:21:42.057404 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:42.089807 ignition[1328]: INFO : Ignition 2.19.0
Jan 17 00:21:42.089807 ignition[1328]: INFO : Stage: mount
Jan 17 00:21:42.091944 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:42.091944 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:21:42.091944 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:21:42.091944 ignition[1328]: INFO : PUT result: OK
Jan 17 00:21:42.094904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:21:42.098915 ignition[1328]: INFO : mount: mount passed
Jan 17 00:21:42.099557 ignition[1328]: INFO : Ignition finished successfully
Jan 17 00:21:42.101357 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:21:42.107334 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:21:42.116286 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:21:42.138165 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1342)
Jan 17 00:21:42.143172 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:21:42.143250 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:21:42.143264 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:21:42.150176 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:21:42.152366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:21:42.187991 ignition[1358]: INFO : Ignition 2.19.0
Jan 17 00:21:42.187991 ignition[1358]: INFO : Stage: files
Jan 17 00:21:42.189742 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:42.189742 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:21:42.189742 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:21:42.189742 ignition[1358]: INFO : PUT result: OK
Jan 17 00:21:42.192442 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:21:42.193735 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:21:42.193735 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:21:42.210473 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:21:42.211647 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:21:42.211647 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:21:42.211086 unknown[1358]: wrote ssh authorized keys file for user: core
Jan 17 00:21:42.211305 systemd-networkd[1166]: eth0: Gained IPv6LL
Jan 17 00:21:42.215999 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:21:42.215999 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:21:42.215999 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:21:42.215999 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 17 00:21:42.269059 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:21:42.516321 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:21:42.516321 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:21:42.519705 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 17 00:21:42.928338 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:21:43.450570 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:21:43.450570 ignition[1358]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:21:43.454957 ignition[1358]: INFO : files: files passed
Jan 17 00:21:43.454957 ignition[1358]: INFO : Ignition finished successfully
Jan 17 00:21:43.455345 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:21:43.462435 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:21:43.467641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:21:43.474708 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:21:43.474831 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:21:43.487738 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:21:43.487738 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:21:43.491182 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:21:43.494014 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:21:43.494919 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:21:43.500358 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:21:43.535911 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:21:43.536060 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:21:43.537520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:21:43.538665 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:21:43.539516 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:21:43.545492 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:21:43.560046 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:21:43.566404 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:21:43.580812 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
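Each op(...) in the files stage above maps onto an entry of the Ignition config fetched earlier: inline files, an HTTPS download, a symlink, a systemd drop-in, and an enablement preset. A trimmed, hypothetical config in that shape, shown as a Python dict for illustration; field names follow the Ignition v3 spec, but the real config's spec version and file contents are not in the log:

ignition_config = {
    "ignition": {"version": "3.4.0"},  # spec version is an assumption
    "storage": {
        "files": [
            # op(4): fetched over HTTPS and written under /opt
            {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
            # op(9): inline file; actual contents are not in the log
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,"}},
        ],
        "links": [
            # op(a): symlink that activates the kubernetes sysext image
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            # op(c)/op(d): drop-in for an existing unit; body not in the log
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf", "contents": "[Service]\n"}]},
            # op(e)/op(10): full unit plus "enabled" preset; body not in the log
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n"},
        ],
    },
}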
Jan 17 00:21:43.581622 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:21:43.582652 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:21:43.583601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:21:43.583784 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:21:43.585228 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:21:43.586129 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:21:43.586956 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:21:43.587781 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:21:43.588764 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:21:43.589555 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:21:43.590347 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:21:43.591185 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:21:43.592465 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:21:43.593379 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:21:43.594108 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:21:43.594312 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:21:43.595410 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:21:43.596231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:21:43.597041 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:21:43.597201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:21:43.597843 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:21:43.598017 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:21:43.599468 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:21:43.599648 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:21:43.600376 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:21:43.600525 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:21:43.607506 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:21:43.608822 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:21:43.609126 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:21:43.613204 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:21:43.614007 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:21:43.614313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:21:43.617424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:21:43.617658 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:21:43.631385 ignition[1412]: INFO : Ignition 2.19.0
Jan 17 00:21:43.631385 ignition[1412]: INFO : Stage: umount
Jan 17 00:21:43.637643 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:21:43.637643 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:21:43.637643 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:21:43.637643 ignition[1412]: INFO : PUT result: OK
Jan 17 00:21:43.633490 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:21:43.633743 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:21:43.644180 ignition[1412]: INFO : umount: umount passed
Jan 17 00:21:43.644982 ignition[1412]: INFO : Ignition finished successfully
Jan 17 00:21:43.646579 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:21:43.646735 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:21:43.647494 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:21:43.647571 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:21:43.650820 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:21:43.650901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:21:43.651774 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:21:43.651843 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:21:43.652754 systemd[1]: Stopped target network.target - Network.
Jan 17 00:21:43.654263 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:21:43.654342 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:21:43.654885 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:21:43.655370 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:21:43.660261 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:21:43.661401 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:21:43.662375 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:21:43.663680 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:21:43.663750 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:21:43.664368 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:21:43.664428 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:21:43.665447 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:21:43.665504 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:21:43.666250 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:21:43.666299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:21:43.666762 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:21:43.667138 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:21:43.669245 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:21:43.670042 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:21:43.670206 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:21:43.670326 systemd-networkd[1166]: eth0: DHCPv6 lease lost
Jan 17 00:21:43.672539 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:21:43.672827 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:21:43.676257 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:21:43.676348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:21:43.677343 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:21:43.677418 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:21:43.684293 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:21:43.684964 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:21:43.685048 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:21:43.688112 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:21:43.689233 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:21:43.689391 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:21:43.699428 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:21:43.699559 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:21:43.700902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:21:43.700973 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:21:43.702969 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:21:43.703039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:21:43.707660 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:21:43.707868 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:21:43.710043 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:21:43.710352 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:21:43.713195 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:21:43.713275 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:21:43.714076 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:21:43.714128 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:21:43.714872 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:21:43.714937 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:21:43.716111 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:21:43.716220 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:21:43.717560 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:21:43.717625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:21:43.727094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:21:43.727891 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:21:43.727993 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:21:43.729336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:21:43.729405 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:43.737689 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:21:43.737827 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:21:43.739839 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:21:43.745410 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:21:43.775660 systemd[1]: Switching root.
Jan 17 00:21:43.801796 systemd-journald[179]: Journal stopped
Jan 17 00:21:45.268958 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:21:45.269063 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:21:45.269092 kernel: SELinux: policy capability open_perms=1
Jan 17 00:21:45.269118 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:21:45.269152 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:21:45.269173 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:21:45.269192 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:21:45.269211 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:21:45.269236 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:21:45.269258 kernel: audit: type=1403 audit(1768609304.136:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:21:45.269285 systemd[1]: Successfully loaded SELinux policy in 41.569ms.
Jan 17 00:21:45.269315 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.253ms.
Jan 17 00:21:45.269341 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:21:45.269365 systemd[1]: Detected virtualization amazon.
Jan 17 00:21:45.269396 systemd[1]: Detected architecture x86-64.
Jan 17 00:21:45.269416 systemd[1]: Detected first boot.
Jan 17 00:21:45.269440 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:21:45.269462 zram_generator::config[1473]: No configuration found.
Jan 17 00:21:45.269491 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:21:45.269516 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:21:45.269541 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 17 00:21:45.269568 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:21:45.269592 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:21:45.269614 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:21:45.269637 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:21:45.269661 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:21:45.269685 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:21:45.269713 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:21:45.269737 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:21:45.269762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:21:45.269784 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:21:45.269809 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:21:45.269831 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:21:45.269856 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:21:45.269881 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:21:45.269909 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:21:45.269932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:21:45.269955 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:21:45.269977 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:21:45.270007 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:21:45.270031 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:21:45.270056 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:21:45.270080 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:21:45.270108 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:21:45.270131 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:21:45.274115 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:21:45.274168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:21:45.274188 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:21:45.276493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:21:45.276522 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:21:45.276639 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:21:45.276668 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:21:45.276692 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:21:45.276726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:21:45.276751 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:21:45.276774 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:21:45.276799 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:21:45.276822 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:21:45.276846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:21:45.276871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:21:45.276894 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:21:45.276923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:21:45.276946 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:21:45.276973 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:21:45.276997 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:21:45.277020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:21:45.277044 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:21:45.277069 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 17 00:21:45.277096 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 17 00:21:45.277122 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:21:45.277160 kernel: loop: module loaded
Jan 17 00:21:45.277859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:21:45.277891 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:21:45.277913 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:21:45.277936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:21:45.277957 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:21:45.277980 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:21:45.278008 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:21:45.278029 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:21:45.278051 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:21:45.278072 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:21:45.278094 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:21:45.278115 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:21:45.279161 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:21:45.279212 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:21:45.279232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:21:45.279256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:21:45.279277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:21:45.279297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:21:45.279319 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:21:45.279340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:21:45.279365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:21:45.279422 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:21:45.279463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:21:45.279485 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:21:45.279507 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:21:45.279422 systemd-journald[1577]: Collecting audit messages is disabled.
Jan 17 00:21:45.279530 systemd-journald[1577]: Journal started
Jan 17 00:21:45.279574 systemd-journald[1577]: Runtime Journal (/run/log/journal/ec22dfb5aa6e551a7fa6051f016a67ea) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:21:45.287259 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:21:45.294168 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:21:45.297452 kernel: ACPI: bus type drm_connector registered
Jan 17 00:21:45.305244 kernel: fuse: init (API version 7.39)
Jan 17 00:21:45.309257 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:21:45.320167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:21:45.337164 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:21:45.346172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:21:45.359289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:21:45.370167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:21:45.380172 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:21:45.394651 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:21:45.394890 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:21:45.396202 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:21:45.398523 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:21:45.401498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:21:45.405771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:21:45.420156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:21:45.426607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:21:45.451510 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:21:45.462312 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:21:45.470559 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:21:45.478867 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:21:45.479973 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Jan 17 00:21:45.479995 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Jan 17 00:21:45.488839 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:21:45.505310 systemd-journald[1577]: Time spent on flushing to /var/log/journal/ec22dfb5aa6e551a7fa6051f016a67ea is 43.240ms for 977 entries.
Jan 17 00:21:45.505310 systemd-journald[1577]: System Journal (/var/log/journal/ec22dfb5aa6e551a7fa6051f016a67ea) is 8.0M, max 195.6M, 187.6M free.
Jan 17 00:21:45.564634 systemd-journald[1577]: Received client request to flush runtime journal.
Jan 17 00:21:45.513074 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:21:45.527477 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:21:45.553406 udevadm[1635]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
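For scale: the flush reported above moved 977 entries from the runtime journal in /run to the persistent journal under /var/log/journal in 43.240 ms, which works out to 43.240 / 977, roughly 0.044 ms (about 44 microseconds) per entry.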
Jan 17 00:21:45.570743 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:21:45.599848 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:21:45.606420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:21:45.633335 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Jan 17 00:21:45.633774 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Jan 17 00:21:45.641957 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:21:46.096778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:21:46.105366 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:21:46.131353 systemd-udevd[1654]: Using default interface naming scheme 'v255'.
Jan 17 00:21:46.167942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:21:46.178465 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:21:46.209390 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:21:46.247604 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 17 00:21:46.290509 (udev-worker)[1656]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:21:46.293943 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:21:46.362185 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 17 00:21:46.407218 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 17 00:21:46.417314 kernel: ACPI: button: Power Button [PWRF]
Jan 17 00:21:46.419169 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Jan 17 00:21:46.430238 kernel: ACPI: button: Sleep Button [SLPF]
Jan 17 00:21:46.430336 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jan 17 00:21:46.447168 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1662)
Jan 17 00:21:46.461290 systemd-networkd[1659]: lo: Link UP
Jan 17 00:21:46.461300 systemd-networkd[1659]: lo: Gained carrier
Jan 17 00:21:46.464095 systemd-networkd[1659]: Enumeration completed
Jan 17 00:21:46.465082 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:21:46.467434 systemd-networkd[1659]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:21:46.467536 systemd-networkd[1659]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:21:46.475337 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:21:46.479345 systemd-networkd[1659]: eth0: Link UP
Jan 17 00:21:46.479599 systemd-networkd[1659]: eth0: Gained carrier
Jan 17 00:21:46.479628 systemd-networkd[1659]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:21:46.492251 systemd-networkd[1659]: eth0: DHCPv4 address 172.31.29.247/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:21:46.538470 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
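The DHCPv4 lease above is internally consistent: a /20 spans 4096 addresses, so 172.31.29.247 falls inside 172.31.16.0/20 (172.31.16.0 through 172.31.31.255), and the gateway 172.31.16.1 is the subnet base plus one, the address AWS reserves for the VPC router.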
Jan 17 00:21:46.544863 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 00:21:46.596437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:21:46.596841 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:46.608498 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:21:46.693026 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:21:46.716171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:21:46.717316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:21:46.723401 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:21:46.741458 lvm[1783]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:21:46.765370 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:21:46.766236 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:21:46.772610 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:21:46.790927 lvm[1786]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:21:46.814840 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:21:46.815978 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:21:46.817022 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:21:46.817215 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:21:46.818091 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:21:46.820273 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:21:46.825397 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:21:46.828330 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:21:46.829785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:21:46.837366 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:21:46.847171 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:21:46.854371 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:21:46.857523 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:21:46.880116 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:21:46.896368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:21:46.897555 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:21:46.918026 kernel: loop0: detected capacity change from 0 to 142488
Jan 17 00:21:46.991434 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:21:47.013365 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 00:21:47.072217 kernel: loop2: detected capacity change from 0 to 224512
Jan 17 00:21:47.286215 kernel: loop3: detected capacity change from 0 to 61336
Jan 17 00:21:47.409273 kernel: loop4: detected capacity change from 0 to 142488
Jan 17 00:21:47.443354 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 00:21:47.479266 kernel: loop6: detected capacity change from 0 to 224512
Jan 17 00:21:47.522185 kernel: loop7: detected capacity change from 0 to 61336
Jan 17 00:21:47.536879 (sd-merge)[1809]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 17 00:21:47.538000 (sd-merge)[1809]: Merged extensions into '/usr'.
Jan 17 00:21:47.544420 systemd[1]: Reloading requested from client PID 1794 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:21:47.544444 systemd[1]: Reloading...
Jan 17 00:21:47.568623 ldconfig[1790]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:21:47.618187 zram_generator::config[1835]: No configuration found.
Jan 17 00:21:47.766438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:21:47.838404 systemd[1]: Reloading finished in 293 ms.
Jan 17 00:21:47.859787 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:21:47.861001 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:21:47.873383 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:21:47.879639 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:21:47.902793 systemd[1]: Reloading requested from client PID 1896 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:21:47.902816 systemd[1]: Reloading...
Jan 17 00:21:47.907324 systemd-networkd[1659]: eth0: Gained IPv6LL
Jan 17 00:21:47.919939 systemd-tmpfiles[1897]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:21:47.921663 systemd-tmpfiles[1897]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:21:47.924795 systemd-tmpfiles[1897]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:21:47.925193 systemd-tmpfiles[1897]: ACLs are not supported, ignoring.
Jan 17 00:21:47.925286 systemd-tmpfiles[1897]: ACLs are not supported, ignoring.
Jan 17 00:21:47.929462 systemd-tmpfiles[1897]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:21:47.929481 systemd-tmpfiles[1897]: Skipping /boot
Jan 17 00:21:47.943664 systemd-tmpfiles[1897]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:21:47.943686 systemd-tmpfiles[1897]: Skipping /boot
Jan 17 00:21:48.021205 zram_generator::config[1928]: No configuration found.
Jan 17 00:21:48.157043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:21:48.234005 systemd[1]: Reloading finished in 330 ms.
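The loop0 through loop7 attachments and the (sd-merge) lines are systemd-sysext at work: each extension image named above ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') is attached as a loop device and overlaid onto /usr, after which PID 1 reloads to pick up the merged unit files. An extension is only merged if it ships a release file whose name matches the extension; a rough sketch of that naming rule in Python (simplified: the real check also compares fields such as ID and SYSEXT_LEVEL inside the release file, and the directory used here is a made-up example):

from pathlib import Path

def has_release_file(image_root: Path, name: str) -> bool:
    # A sysext image must contain usr/lib/extension-release.d/extension-release.<name>,
    # where <name> matches the image name (e.g. "kubernetes" for kubernetes.raw).
    rel = image_root / "usr/lib/extension-release.d" / f"extension-release.{name}"
    return rel.is_file()

# Hypothetical call for the 'kubernetes' extension seen in the log:
print(has_release_file(Path("/tmp/kubernetes-image"), "kubernetes"))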
Jan 17 00:21:48.248220 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:21:48.254001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:21:48.265464 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:21:48.271369 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:21:48.274496 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:21:48.291517 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:21:48.298357 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:21:48.311127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:21:48.312323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:21:48.316267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:21:48.325205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:21:48.341528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:21:48.342397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:21:48.342585 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:21:48.346241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:21:48.346483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:21:48.363963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:21:48.364225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:21:48.366136 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:21:48.368418 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:21:48.383013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:21:48.398884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:21:48.399319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:21:48.405915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:21:48.416473 augenrules[2022]: No rules
Jan 17 00:21:48.417692 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:21:48.431516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:21:48.438498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:21:48.441473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:21:48.441782 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:21:48.446333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:21:48.448120 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:21:48.453455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:21:48.461513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:21:48.466037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:21:48.470114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:21:48.470380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:21:48.471744 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:21:48.471993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:21:48.475869 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:21:48.476820 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:21:48.485652 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:21:48.500910 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:21:48.501005 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:21:48.513540 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:21:48.530800 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:21:48.539219 systemd-resolved[1998]: Positive Trust Anchors:
Jan 17 00:21:48.539235 systemd-resolved[1998]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:21:48.539284 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:21:48.539287 systemd-resolved[1998]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:21:48.541685 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:21:48.545758 systemd-resolved[1998]: Defaulting to hostname 'linux'.
Jan 17 00:21:48.548415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:21:48.549278 systemd[1]: Reached target network.target - Network.
Jan 17 00:21:48.549740 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:21:48.550199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:21:48.550586 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:21:48.551072 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:21:48.551503 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:21:48.552052 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:21:48.552601 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:21:48.552969 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:21:48.553383 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:21:48.553433 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:21:48.553792 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:21:48.554802 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:21:48.556797 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:21:48.559004 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:21:48.561387 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:21:48.562108 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:21:48.562624 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:21:48.563314 systemd[1]: System is tainted: cgroupsv1
Jan 17 00:21:48.563369 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:21:48.563403 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:21:48.567279 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:21:48.570482 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 00:21:48.583569 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:21:48.587649 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:21:48.595178 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:21:48.600916 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:21:48.617916 jq[2056]: false
Jan 17 00:21:48.626506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:21:48.640403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:21:48.649495 extend-filesystems[2057]: Found loop4
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found loop5
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found loop6
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found loop7
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1p1
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1p2
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1p3
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found usr
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1p4
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1p6
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1p7
Jan 17 00:21:48.655258 extend-filesystems[2057]: Found nvme0n1p9
Jan 17 00:21:48.655258 extend-filesystems[2057]: Checking size of /dev/nvme0n1p9
Jan 17 00:21:48.664697 systemd[1]: Started ntpd.service - Network Time Service.
Jan 17 00:21:48.672244 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:21:48.682277 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:21:48.696283 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:21:48.697969 dbus-daemon[2055]: [system] SELinux support is enabled Jan 17 00:21:48.704348 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:21:48.719499 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:21:48.726276 extend-filesystems[2057]: Resized partition /dev/nvme0n1p9 Jan 17 00:21:48.728971 dbus-daemon[2055]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1659 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:48.738249 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:21:48.740609 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:21:48.751397 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:21:48.761310 extend-filesystems[2079]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:21:48.766365 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:21:48.767822 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:21:48.771076 ntpd[2066]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:21:48.796837 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: ---------------------------------------------------- Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: corporation. Support and training for ntp-4 are Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: available at https://www.nwtime.org/support Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: ---------------------------------------------------- Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: proto: precision = 0.057 usec (-24) Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: basedate set to 2026-01-04 Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: gps base set to 2026-01-04 (week 2400) Jan 17 00:21:48.796881 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:48.771114 ntpd[2066]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:21:48.771125 ntpd[2066]: ---------------------------------------------------- Jan 17 00:21:48.771136 ntpd[2066]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:21:48.771674 ntpd[2066]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:21:48.771688 ntpd[2066]: corporation. 
Support and training for ntp-4 are Jan 17 00:21:48.771698 ntpd[2066]: available at https://www.nwtime.org/support Jan 17 00:21:48.810629 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:48.810629 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:48.810629 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Listen normally on 3 eth0 172.31.29.247:123 Jan 17 00:21:48.810629 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:48.810629 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Listen normally on 5 eth0 [fe80::453:90ff:febf:b56f%2]:123 Jan 17 00:21:48.810629 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: Listening on routing socket on fd #22 for interface updates Jan 17 00:21:48.798740 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:21:48.771708 ntpd[2066]: ---------------------------------------------------- Jan 17 00:21:48.799082 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:21:48.778690 ntpd[2066]: proto: precision = 0.057 usec (-24) Jan 17 00:21:48.782409 ntpd[2066]: basedate set to 2026-01-04 Jan 17 00:21:48.782431 ntpd[2066]: gps base set to 2026-01-04 (week 2400) Jan 17 00:21:48.789670 ntpd[2066]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:48.798172 ntpd[2066]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:48.798403 ntpd[2066]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:48.798450 ntpd[2066]: Listen normally on 3 eth0 172.31.29.247:123 Jan 17 00:21:48.798495 ntpd[2066]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:48.798550 ntpd[2066]: Listen normally on 5 eth0 [fe80::453:90ff:febf:b56f%2]:123 Jan 17 00:21:48.798593 ntpd[2066]: Listening on routing socket on fd #22 for interface updates Jan 17 00:21:48.821007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:21:48.823529 jq[2085]: true Jan 17 00:21:48.825619 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:21:48.829504 ntpd[2066]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:48.829550 ntpd[2066]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:48.830102 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:48.830102 ntpd[2066]: 17 Jan 00:21:48 ntpd[2066]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:48.852561 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:21:48.853380 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:21:48.885021 update_engine[2082]: I20260117 00:21:48.884891 2082 main.cc:92] Flatcar Update Engine starting Jan 17 00:21:48.888133 update_engine[2082]: I20260117 00:21:48.887923 2082 update_check_scheduler.cc:74] Next update check in 11m17s Jan 17 00:21:48.907417 (ntainerd)[2105]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:21:48.931877 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:21:48.934565 dbus-daemon[2055]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:21:48.940590 jq[2103]: true Jan 17 00:21:48.954687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
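[annotation] At this point ntpd reports listening on 123/udp on lo and eth0. A minimal SNTP (RFC 4330) client sketch that could query such a daemon; `NTP_HOST` is a placeholder and not taken from the log, and the daemon's restrictions may drop client queries:

```python
# Minimal SNTP client sketch (RFC 4330). NTP_HOST is an assumption,
# not a value from this log; error handling is omitted.
import socket
import struct
import time

NTP_HOST = "127.0.0.1"           # assumption: query the local ntpd
NTP_EPOCH_OFFSET = 2208988800    # seconds from 1900-01-01 to 1970-01-01

# 48-byte request; first byte packs LI=0, VN=4, Mode=3 (client).
packet = bytearray(48)
packet[0] = (0 << 6) | (4 << 3) | 3

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2.0)
    sock.sendto(packet, (NTP_HOST, 123))
    data, _ = sock.recvfrom(48)

# The Transmit Timestamp sits at bytes 40-47: 32-bit seconds + fraction.
secs, frac = struct.unpack("!II", data[40:48])
unix_time = secs - NTP_EPOCH_OFFSET + frac / 2**32
print("server time:", time.strftime("%F %T", time.gmtime(unix_time)))
```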
Jan 17 00:21:48.976195 coreos-metadata[2053]: Jan 17 00:21:48.970 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:21:48.976074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:21:48.985827 tar[2096]: linux-amd64/LICENSE Jan 17 00:21:48.985827 tar[2096]: linux-amd64/helm Jan 17 00:21:48.976120 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:21:49.006362 coreos-metadata[2053]: Jan 17 00:21:48.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 00:21:49.006362 coreos-metadata[2053]: Jan 17 00:21:48.996 INFO Fetch successful Jan 17 00:21:49.006362 coreos-metadata[2053]: Jan 17 00:21:49.001 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 00:21:48.986265 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:21:49.015376 coreos-metadata[2053]: Jan 17 00:21:49.009 INFO Fetch successful Jan 17 00:21:49.015376 coreos-metadata[2053]: Jan 17 00:21:49.009 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 00:21:49.015376 coreos-metadata[2053]: Jan 17 00:21:49.014 INFO Fetch successful Jan 17 00:21:49.015376 coreos-metadata[2053]: Jan 17 00:21:49.015 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 00:21:48.988446 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:21:48.988522 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:21:48.989895 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:21:49.005380 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:21:49.014295 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 17 00:21:49.031874 coreos-metadata[2053]: Jan 17 00:21:49.025 INFO Fetch successful Jan 17 00:21:49.031874 coreos-metadata[2053]: Jan 17 00:21:49.025 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 00:21:49.031874 coreos-metadata[2053]: Jan 17 00:21:49.027 INFO Fetch failed with 404: resource not found Jan 17 00:21:49.031874 coreos-metadata[2053]: Jan 17 00:21:49.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 00:21:49.031874 coreos-metadata[2053]: Jan 17 00:21:49.029 INFO Fetch successful Jan 17 00:21:49.031874 coreos-metadata[2053]: Jan 17 00:21:49.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 00:21:49.036123 coreos-metadata[2053]: Jan 17 00:21:49.033 INFO Fetch successful Jan 17 00:21:49.036123 coreos-metadata[2053]: Jan 17 00:21:49.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 00:21:49.036123 coreos-metadata[2053]: Jan 17 00:21:49.035 INFO Fetch successful Jan 17 00:21:49.036123 coreos-metadata[2053]: Jan 17 00:21:49.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 00:21:49.044873 coreos-metadata[2053]: Jan 17 00:21:49.040 INFO Fetch successful Jan 17 00:21:49.050852 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 17 00:21:49.096360 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1656) Jan 17 00:21:49.096460 coreos-metadata[2053]: Jan 17 00:21:49.045 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 00:21:49.096460 coreos-metadata[2053]: Jan 17 00:21:49.052 INFO Fetch successful Jan 17 00:21:49.099960 systemd-logind[2080]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:21:49.099990 systemd-logind[2080]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 17 00:21:49.100014 systemd-logind[2080]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:21:49.102996 systemd-logind[2080]: New seat seat0. Jan 17 00:21:49.107316 extend-filesystems[2079]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 00:21:49.107316 extend-filesystems[2079]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:21:49.107316 extend-filesystems[2079]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 17 00:21:49.121760 extend-filesystems[2057]: Resized filesystem in /dev/nvme0n1p9 Jan 17 00:21:49.130265 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:21:49.137654 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:21:49.157455 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:21:49.158848 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:21:49.275813 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:21:49.278525 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:21:49.304686 bash[2176]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:49.307840 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:21:49.330522 systemd[1]: Starting sshkeys.service... 
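[annotation] The resize messages above record an on-line ext4 grow of /dev/nvme0n1p9 while it is mounted at /: from 553472 to 3587067 blocks at 4 KiB each. Converting the reported block counts to bytes shows the scale of the change:

```python
# Arithmetic on the numbers reported by the EXT4/resize2fs messages above.
BLOCK = 4096                      # "(4k) blocks" per extend-filesystems
old_blocks, new_blocks = 553472, 3587067

old = old_blocks * BLOCK          # 2,267,021,312 B  ~  2.11 GiB
new = new_blocks * BLOCK          # 14,692,626,432 B ~ 13.68 GiB
print(f"before: {old/2**30:.2f} GiB, after: {new/2**30:.2f} GiB "
      f"(+{(new - old)/2**30:.2f} GiB)")
```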
Jan 17 00:21:49.359450 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:21:49.367803 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:21:49.423516 coreos-metadata[2220]: Jan 17 00:21:49.423 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:21:49.426563 coreos-metadata[2220]: Jan 17 00:21:49.426 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:21:49.427522 coreos-metadata[2220]: Jan 17 00:21:49.427 INFO Fetch successful Jan 17 00:21:49.427522 coreos-metadata[2220]: Jan 17 00:21:49.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:21:49.428600 coreos-metadata[2220]: Jan 17 00:21:49.428 INFO Fetch successful Jan 17 00:21:49.430433 unknown[2220]: wrote ssh authorized keys file for user: core Jan 17 00:21:49.506568 update-ssh-keys[2226]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:49.507572 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:21:49.518323 systemd[1]: Finished sshkeys.service. Jan 17 00:21:49.581231 amazon-ssm-agent[2153]: Initializing new seelog logger Jan 17 00:21:49.581231 amazon-ssm-agent[2153]: New Seelog Logger Creation Complete Jan 17 00:21:49.581231 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:49.581231 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:49.581231 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 processing appconfig overrides Jan 17 00:21:49.583603 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:49.583603 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:49.583603 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 processing appconfig overrides Jan 17 00:21:49.583603 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:49.583603 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:49.583603 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 processing appconfig overrides Jan 17 00:21:49.583603 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO Proxy environment variables: Jan 17 00:21:49.591809 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:49.591809 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
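[annotation] Both metadata agents above follow the IMDSv2 sequence visible in the log: PUT a session token to /latest/api/token, then GET metadata paths with the token header; the sshkeys variant ends by writing the fetched key for user core. A sketch of that flow, assuming it runs on an EC2 instance with IMDSv2 reachable (retries and error handling omitted):

```python
# Sketch of the IMDSv2 sequence the coreos-metadata units log above.
# The /2021-01-03/ path version mirrors the log lines.
import os
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",
        headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
key = imds_get("meta-data/public-keys/0/openssh-key", token)

# What "wrote ssh authorized keys file for user: core" amounts to:
# a 0700 ~/.ssh directory and a 0600 authorized_keys file.
ssh_dir = os.path.expanduser("~core/.ssh")   # assumes the user exists
os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
path = os.path.join(ssh_dir, "authorized_keys")
with open(path, "w") as f:
    f.write(key.rstrip("\n") + "\n")
os.chmod(path, 0o600)
```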
Jan 17 00:21:49.591809 amazon-ssm-agent[2153]: 2026/01/17 00:21:49 processing appconfig overrides Jan 17 00:21:49.662452 locksmithd[2138]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:21:49.687173 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO https_proxy: Jan 17 00:21:49.783118 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO http_proxy: Jan 17 00:21:49.847426 containerd[2105]: time="2026-01-17T00:21:49.847242521Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:21:49.875739 dbus-daemon[2055]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:21:49.888587 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO no_proxy: Jan 17 00:21:49.879345 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:21:49.876341 dbus-daemon[2055]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2135 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:49.893502 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:21:49.939437 polkitd[2293]: Started polkitd version 121 Jan 17 00:21:49.968967 polkitd[2293]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:21:49.972313 polkitd[2293]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:21:49.975518 polkitd[2293]: Finished loading, compiling and executing 2 rules Jan 17 00:21:49.976318 dbus-daemon[2055]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:21:49.976563 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:21:49.978533 polkitd[2293]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:21:49.982555 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:21:50.008365 systemd-hostnamed[2135]: Hostname set to (transient) Jan 17 00:21:50.010212 systemd-resolved[1998]: System hostname changed to 'ip-172-31-29-247'. Jan 17 00:21:50.049697 containerd[2105]: time="2026-01-17T00:21:50.049226954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.057698099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.057754348Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.057780257Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.057988560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.058012622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.058090560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.058109820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.059401615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.059430825Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.059453885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060165 containerd[2105]: time="2026-01-17T00:21:50.059470360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060666 containerd[2105]: time="2026-01-17T00:21:50.059576216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:50.060666 containerd[2105]: time="2026-01-17T00:21:50.059856236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:50.062180 containerd[2105]: time="2026-01-17T00:21:50.061397237Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:50.062180 containerd[2105]: time="2026-01-17T00:21:50.061430657Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:21:50.062180 containerd[2105]: time="2026-01-17T00:21:50.061569802Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:21:50.064158 containerd[2105]: time="2026-01-17T00:21:50.063182804Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.068700522Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.068785095Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.068815701Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.068884475Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.068908163Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.069094661Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071545817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071718102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071742498Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071763030Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071787974Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071811695Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071833426Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072169 containerd[2105]: time="2026-01-17T00:21:50.071855836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.071878240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.071908247Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.071927059Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.071949128Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.071977519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.072009462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.072030740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.072052814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.072071570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.072092133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 17 00:21:50.072790 containerd[2105]: time="2026-01-17T00:21:50.072131463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073263733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073312575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073339437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073359351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073384711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073420206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073447266Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073481867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073501017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073518713Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073572717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073598118Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:21:50.076167 containerd[2105]: time="2026-01-17T00:21:50.073627781Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:21:50.076759 containerd[2105]: time="2026-01-17T00:21:50.073646393Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:21:50.076759 containerd[2105]: time="2026-01-17T00:21:50.073663653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:21:50.076759 containerd[2105]: time="2026-01-17T00:21:50.073682249Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:21:50.076759 containerd[2105]: time="2026-01-17T00:21:50.073698074Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:21:50.076759 containerd[2105]: time="2026-01-17T00:21:50.073713552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:21:50.076956 containerd[2105]: time="2026-01-17T00:21:50.074110386Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:21:50.076956 containerd[2105]: time="2026-01-17T00:21:50.075575937Z" level=info msg="Connect containerd service" Jan 17 00:21:50.076956 containerd[2105]: time="2026-01-17T00:21:50.075651771Z" level=info msg="using legacy CRI server" Jan 17 00:21:50.076956 containerd[2105]: time="2026-01-17T00:21:50.075663810Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:21:50.076956 containerd[2105]: time="2026-01-17T00:21:50.075883448Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:21:50.079173 containerd[2105]: time="2026-01-17T00:21:50.078429519Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
00:21:50.079457 containerd[2105]: time="2026-01-17T00:21:50.079438034Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:21:50.082401 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:21:50.081312 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:21:50.082569 containerd[2105]: time="2026-01-17T00:21:50.080771398Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:21:50.082569 containerd[2105]: time="2026-01-17T00:21:50.080863346Z" level=info msg="Start subscribing containerd event" Jan 17 00:21:50.082569 containerd[2105]: time="2026-01-17T00:21:50.080917763Z" level=info msg="Start recovering state" Jan 17 00:21:50.082569 containerd[2105]: time="2026-01-17T00:21:50.081006528Z" level=info msg="Start event monitor" Jan 17 00:21:50.082569 containerd[2105]: time="2026-01-17T00:21:50.081026620Z" level=info msg="Start snapshots syncer" Jan 17 00:21:50.082569 containerd[2105]: time="2026-01-17T00:21:50.081042168Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:21:50.082569 containerd[2105]: time="2026-01-17T00:21:50.081053476Z" level=info msg="Start streaming server" Jan 17 00:21:50.084076 containerd[2105]: time="2026-01-17T00:21:50.082876091Z" level=info msg="containerd successfully booted in 0.244116s" Jan 17 00:21:50.180233 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO Agent will take identity from EC2 Jan 17 00:21:50.281867 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:50.283291 sshd_keygen[2120]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:21:50.341900 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:21:50.354991 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:21:50.380939 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:50.389137 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:21:50.389485 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:21:50.407880 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:21:50.433485 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:21:50.446643 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:21:50.456731 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:21:50.459065 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:21:50.480433 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:50.489486 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:21:50.489486 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 00:21:50.489486 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:21:50.489486 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 17 00:21:50.489486 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [Registrar] Starting registrar module Jan 17 00:21:50.489763 amazon-ssm-agent[2153]: 2026-01-17 00:21:49 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:21:50.489763 amazon-ssm-agent[2153]: 2026-01-17 00:21:50 INFO [EC2Identity] EC2 registration was successful. Jan 17 00:21:50.489763 amazon-ssm-agent[2153]: 2026-01-17 00:21:50 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:21:50.489763 amazon-ssm-agent[2153]: 2026-01-17 00:21:50 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:21:50.489763 amazon-ssm-agent[2153]: 2026-01-17 00:21:50 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:21:50.580181 amazon-ssm-agent[2153]: 2026-01-17 00:21:50 INFO [CredentialRefresher] Next credential rotation will be in 30.191660210983333 minutes Jan 17 00:21:50.692814 tar[2096]: linux-amd64/README.md Jan 17 00:21:50.707633 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:21:51.507988 amazon-ssm-agent[2153]: 2026-01-17 00:21:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:21:51.608493 amazon-ssm-agent[2153]: 2026-01-17 00:21:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2331) started Jan 17 00:21:51.709047 amazon-ssm-agent[2153]: 2026-01-17 00:21:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:21:51.927363 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:21:51.932580 systemd[1]: Started sshd@0-172.31.29.247:22-4.153.228.146:59108.service - OpenSSH per-connection server daemon (4.153.228.146:59108). Jan 17 00:21:51.937427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:51.944202 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:21:51.944814 systemd[1]: Startup finished in 7.143s (kernel) + 7.847s (userspace) = 14.990s. Jan 17 00:21:51.947641 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:52.469915 sshd[2348]: Accepted publickey for core from 4.153.228.146 port 59108 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:52.472904 sshd[2348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:52.483486 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:21:52.490592 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:21:52.494415 systemd-logind[2080]: New session 1 of user core. Jan 17 00:21:52.512567 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:21:52.524669 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:21:52.530608 (systemd)[2365]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:21:52.678667 systemd[2365]: Queued start job for default target default.target. Jan 17 00:21:52.679940 systemd[2365]: Created slice app.slice - User Application Slice. Jan 17 00:21:52.679985 systemd[2365]: Reached target paths.target - Paths. 
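[annotation] The "Accepted publickey for core ... RSA SHA256:sbILTD9G..." record above identifies the key by its OpenSSH SHA256 fingerprint: the SHA-256 of the raw key blob, base64-encoded with padding stripped. A self-contained sketch using a dummy blob (a real blob would come from the base64 field of an authorized_keys line):

```python
# Compute an OpenSSH-style "SHA256:..." fingerprint like the one sshd
# logs above. The blob here is a syntactically valid ed25519 key built
# from a dummy all-zero key, purely for illustration; for a real key use
# base64.b64decode(line.split()[1]) on an authorized_keys entry.
import base64
import hashlib
import struct

def ssh_string(b: bytes) -> bytes:
    return struct.pack("!I", len(b)) + b

blob = ssh_string(b"ssh-ed25519") + ssh_string(bytes(32))
digest = hashlib.sha256(blob).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```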
Jan 17 00:21:52.680005 systemd[2365]: Reached target timers.target - Timers. Jan 17 00:21:52.687582 systemd[2365]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:21:52.696357 systemd[2365]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:21:52.697687 systemd[2365]: Reached target sockets.target - Sockets. Jan 17 00:21:52.697804 systemd[2365]: Reached target basic.target - Basic System. Jan 17 00:21:52.698120 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:21:52.698277 systemd[2365]: Reached target default.target - Main User Target. Jan 17 00:21:52.698960 systemd[2365]: Startup finished in 160ms. Jan 17 00:21:52.706391 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:21:53.102306 systemd[1]: Started sshd@1-172.31.29.247:22-4.153.228.146:59116.service - OpenSSH per-connection server daemon (4.153.228.146:59116). Jan 17 00:21:53.113309 kubelet[2350]: E0117 00:21:53.113191 2350 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:53.116961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:53.117288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:53.619082 sshd[2378]: Accepted publickey for core from 4.153.228.146 port 59116 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:53.620891 sshd[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:53.627252 systemd-logind[2080]: New session 2 of user core. Jan 17 00:21:53.632998 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:21:53.993118 sshd[2378]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:53.996117 systemd[1]: sshd@1-172.31.29.247:22-4.153.228.146:59116.service: Deactivated successfully. Jan 17 00:21:54.000225 systemd-logind[2080]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:21:54.002327 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:21:54.003503 systemd-logind[2080]: Removed session 2. Jan 17 00:21:54.068720 systemd[1]: Started sshd@2-172.31.29.247:22-4.153.228.146:59122.service - OpenSSH per-connection server daemon (4.153.228.146:59122). Jan 17 00:21:54.546631 sshd[2388]: Accepted publickey for core from 4.153.228.146 port 59122 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:54.548192 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:54.554520 systemd-logind[2080]: New session 3 of user core. Jan 17 00:21:54.560719 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:21:54.892050 sshd[2388]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:54.896173 systemd[1]: sshd@2-172.31.29.247:22-4.153.228.146:59122.service: Deactivated successfully. Jan 17 00:21:54.901455 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:21:54.901531 systemd-logind[2080]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:21:54.903331 systemd-logind[2080]: Removed session 3. Jan 17 00:21:54.987450 systemd[1]: Started sshd@3-172.31.29.247:22-4.153.228.146:52706.service - OpenSSH per-connection server daemon (4.153.228.146:52706). 
Jan 17 00:21:55.513160 sshd[2396]: Accepted publickey for core from 4.153.228.146 port 52706 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:55.514593 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:55.521238 systemd-logind[2080]: New session 4 of user core. Jan 17 00:21:55.534399 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:21:56.152269 systemd-resolved[1998]: Clock change detected. Flushing caches. Jan 17 00:21:56.276632 sshd[2396]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:56.279763 systemd[1]: sshd@3-172.31.29.247:22-4.153.228.146:52706.service: Deactivated successfully. Jan 17 00:21:56.282754 systemd-logind[2080]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:21:56.283265 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:21:56.285171 systemd-logind[2080]: Removed session 4. Jan 17 00:21:56.369392 systemd[1]: Started sshd@4-172.31.29.247:22-4.153.228.146:52708.service - OpenSSH per-connection server daemon (4.153.228.146:52708). Jan 17 00:21:56.895024 sshd[2404]: Accepted publickey for core from 4.153.228.146 port 52708 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:56.896641 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:56.903723 systemd-logind[2080]: New session 5 of user core. Jan 17 00:21:56.909488 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:21:57.209311 sudo[2408]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:21:57.209614 sudo[2408]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:57.219991 sudo[2408]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:57.305123 sshd[2404]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:57.309537 systemd[1]: sshd@4-172.31.29.247:22-4.153.228.146:52708.service: Deactivated successfully. Jan 17 00:21:57.315356 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:21:57.315813 systemd-logind[2080]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:21:57.317482 systemd-logind[2080]: Removed session 5. Jan 17 00:21:57.404742 systemd[1]: Started sshd@5-172.31.29.247:22-4.153.228.146:52710.service - OpenSSH per-connection server daemon (4.153.228.146:52710). Jan 17 00:21:57.955640 sshd[2413]: Accepted publickey for core from 4.153.228.146 port 52710 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:57.957273 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:57.970589 systemd-logind[2080]: New session 6 of user core. Jan 17 00:21:57.977452 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:21:58.247802 sudo[2418]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:21:58.248136 sudo[2418]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:58.252740 sudo[2418]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:58.258612 sudo[2417]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:21:58.259005 sudo[2417]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:58.273402 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
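[annotation] By this point the journal settles into a rhythm of per-connection sshd units and pam_unix session open/close pairs. A small sketch that tallies them from journal text piped on stdin, assuming lines shaped like the ones above:

```python
# Tally SSH session opens/closes from journal text shaped like the
# records above ("pam_unix(sshd:session): session opened/closed ...").
# Usage sketch: pipe journal output into this script on stdin.
import re
import sys

PAT = re.compile(r"pam_unix\(sshd:session\): session (opened|closed)")

opened = closed = 0
for line in sys.stdin:
    m = PAT.search(line)
    if not m:
        continue
    if m.group(1) == "opened":
        opened += 1
    else:
        closed += 1
print(f"opened={opened} closed={closed} still_open={opened - closed}")
```

Filtering on `sshd:session` deliberately skips the `sudo:session` records interleaved in the same stretch of log.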
Jan 17 00:21:58.277752 auditctl[2421]: No rules Jan 17 00:21:58.278557 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:21:58.278906 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:21:58.291342 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:21:58.319401 augenrules[2440]: No rules Jan 17 00:21:58.321429 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:21:58.325518 sudo[2417]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:58.408684 sshd[2413]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:58.412462 systemd[1]: sshd@5-172.31.29.247:22-4.153.228.146:52710.service: Deactivated successfully. Jan 17 00:21:58.417499 systemd-logind[2080]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:21:58.418223 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:21:58.419740 systemd-logind[2080]: Removed session 6. Jan 17 00:21:58.504487 systemd[1]: Started sshd@6-172.31.29.247:22-4.153.228.146:52722.service - OpenSSH per-connection server daemon (4.153.228.146:52722). Jan 17 00:21:59.021637 sshd[2449]: Accepted publickey for core from 4.153.228.146 port 52722 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:59.023757 sshd[2449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:59.039723 systemd-logind[2080]: New session 7 of user core. Jan 17 00:21:59.044649 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:21:59.314946 sudo[2453]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:21:59.315394 sudo[2453]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:59.732377 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:21:59.733529 (dockerd)[2469]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:22:00.273656 dockerd[2469]: time="2026-01-17T00:22:00.273586286Z" level=info msg="Starting up" Jan 17 00:22:00.436562 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport536079287-merged.mount: Deactivated successfully. Jan 17 00:22:00.890075 dockerd[2469]: time="2026-01-17T00:22:00.889961354Z" level=info msg="Loading containers: start." Jan 17 00:22:01.039088 kernel: Initializing XFRM netlink socket Jan 17 00:22:01.091379 (udev-worker)[2491]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:01.182585 systemd-networkd[1659]: docker0: Link UP Jan 17 00:22:01.245429 dockerd[2469]: time="2026-01-17T00:22:01.245378893Z" level=info msg="Loading containers: done." 
Jan 17 00:22:01.320727 dockerd[2469]: time="2026-01-17T00:22:01.320309397Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:22:01.320727 dockerd[2469]: time="2026-01-17T00:22:01.320429816Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:22:01.320727 dockerd[2469]: time="2026-01-17T00:22:01.320538547Z" level=info msg="Daemon has completed initialization" Jan 17 00:22:01.496158 dockerd[2469]: time="2026-01-17T00:22:01.495661433Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:22:01.496243 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:22:03.337729 containerd[2105]: time="2026-01-17T00:22:03.337676214Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:22:03.746600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:22:03.763988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:04.253506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3109494300.mount: Deactivated successfully. Jan 17 00:22:04.331597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:04.336338 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:22:04.429121 kubelet[2626]: E0117 00:22:04.429062 2626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:22:04.436236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:22:04.436530 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
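[annotation] dockerd finishes with "API listen on /run/docker.sock", so anything that can speak HTTP over a Unix socket can query the Engine API directly. A minimal probe without the docker CLI or SDK, assuming enough privilege to open the socket; HTTP/1.0 is used so the response ends when the connection closes:

```python
# Minimal probe of the Docker Engine API over the Unix socket the
# daemon reports above. GET /version returns JSON including "Version"
# and "ApiVersion".
import json
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk

headers, _, body = raw.partition(b"\r\n\r\n")
info = json.loads(body)
print(info["Version"], info["ApiVersion"])
```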
Jan 17 00:22:06.043495 containerd[2105]: time="2026-01-17T00:22:06.043438055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:06.045143 containerd[2105]: time="2026-01-17T00:22:06.045091230Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 17 00:22:06.046206 containerd[2105]: time="2026-01-17T00:22:06.046140276Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:06.050087 containerd[2105]: time="2026-01-17T00:22:06.049612752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:06.051797 containerd[2105]: time="2026-01-17T00:22:06.050939725Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.713215164s" Jan 17 00:22:06.051797 containerd[2105]: time="2026-01-17T00:22:06.050989130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:22:06.052193 containerd[2105]: time="2026-01-17T00:22:06.052167629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:22:07.981680 containerd[2105]: time="2026-01-17T00:22:07.981618041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:07.983987 containerd[2105]: time="2026-01-17T00:22:07.983731620Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 17 00:22:07.985326 containerd[2105]: time="2026-01-17T00:22:07.984915107Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:07.988017 containerd[2105]: time="2026-01-17T00:22:07.987971871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:07.991569 containerd[2105]: time="2026-01-17T00:22:07.991515905Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.939226854s" Jan 17 00:22:07.991569 containerd[2105]: time="2026-01-17T00:22:07.991571752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 
00:22:07.993595 containerd[2105]: time="2026-01-17T00:22:07.993550188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:22:09.556809 containerd[2105]: time="2026-01-17T00:22:09.556736525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:09.558038 containerd[2105]: time="2026-01-17T00:22:09.557979631Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 17 00:22:09.560818 containerd[2105]: time="2026-01-17T00:22:09.560736249Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:09.564066 containerd[2105]: time="2026-01-17T00:22:09.564019518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:09.565010 containerd[2105]: time="2026-01-17T00:22:09.564861914Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.571090539s" Jan 17 00:22:09.565010 containerd[2105]: time="2026-01-17T00:22:09.564895653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:22:09.565916 containerd[2105]: time="2026-01-17T00:22:09.565873732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:22:10.703422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815210344.mount: Deactivated successfully. 
Jan 17 00:22:11.370788 containerd[2105]: time="2026-01-17T00:22:11.370722061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:11.372270 containerd[2105]: time="2026-01-17T00:22:11.372072925Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:22:11.374079 containerd[2105]: time="2026-01-17T00:22:11.373348151Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:11.375884 containerd[2105]: time="2026-01-17T00:22:11.375848524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:11.376706 containerd[2105]: time="2026-01-17T00:22:11.376670562Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.810762665s" Jan 17 00:22:11.376787 containerd[2105]: time="2026-01-17T00:22:11.376713123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:22:11.377554 containerd[2105]: time="2026-01-17T00:22:11.377508603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:22:11.873882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732580915.mount: Deactivated successfully. 
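[annotation] Each pull above pairs a "bytes read=N" count with the elapsed time in the PullImage result, which gives a rough effective download rate per image ("bytes read" is what containerd reported at the stop-pulling event, so treat these as approximations):

```python
# Effective pull rates from the containerd messages above:
# "bytes read=N" paired with each PullImage's "in Xs" duration.
pulls = {
    "kube-apiserver:v1.32.11": (29070647, 2.713215164),
    "kube-proxy:v1.32.11":     (31161899, 1.810762665),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s")
# -> roughly 10.2 MiB/s and 16.4 MiB/s respectively
```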
Jan 17 00:22:12.883299 containerd[2105]: time="2026-01-17T00:22:12.883240572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:12.884396 containerd[2105]: time="2026-01-17T00:22:12.884332654Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 17 00:22:12.885633 containerd[2105]: time="2026-01-17T00:22:12.885197867Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:12.888831 containerd[2105]: time="2026-01-17T00:22:12.888799192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:12.889898 containerd[2105]: time="2026-01-17T00:22:12.889869393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.512322403s" Jan 17 00:22:12.890001 containerd[2105]: time="2026-01-17T00:22:12.889987658Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:22:12.892590 containerd[2105]: time="2026-01-17T00:22:12.892561909Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:22:13.360419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792510955.mount: Deactivated successfully. 
Jan 17 00:22:13.368001 containerd[2105]: time="2026-01-17T00:22:13.367107973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:13.368156 containerd[2105]: time="2026-01-17T00:22:13.368086798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:22:13.369390 containerd[2105]: time="2026-01-17T00:22:13.369338063Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:13.372139 containerd[2105]: time="2026-01-17T00:22:13.371421451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:13.372139 containerd[2105]: time="2026-01-17T00:22:13.372011939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 479.416894ms" Jan 17 00:22:13.372139 containerd[2105]: time="2026-01-17T00:22:13.372037521Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:22:13.372713 containerd[2105]: time="2026-01-17T00:22:13.372683500Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:22:13.920570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612631838.mount: Deactivated successfully. Jan 17 00:22:14.687128 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:22:14.694440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:15.508353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:15.510564 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:22:15.628486 kubelet[2823]: E0117 00:22:15.628400 2823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:22:15.632976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:22:15.633760 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
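[Editor's note] The kubelet restart loop above (restart counter 2, exit status 1) is the normal pre-join state on a kubeadm node: kubelet.service is enabled at boot, but /var/lib/kubelet/config.yaml is only written by kubeadm init/join, so every start aborts until that happens. A sketch reproducing the failing precondition, with the path taken from the log entry (illustration only; kubelet's real config loader does much more):

    // cfgcheck.go - reproduce the precondition that fails in the kubelet
    // entry above: the config file kubeadm writes does not exist yet.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml" // written by kubeadm init/join
        if _, err := os.Stat(path); err != nil {
            fmt.Printf("failed to load kubelet config file, path: %s, error: %v\n", path, err)
            os.Exit(1) // systemd then schedules the next restart attempt
        }
    }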
Jan 17 00:22:16.925921 containerd[2105]: time="2026-01-17T00:22:16.925849621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:16.927928 containerd[2105]: time="2026-01-17T00:22:16.927866083Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 17 00:22:16.930491 containerd[2105]: time="2026-01-17T00:22:16.930426258Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:16.934956 containerd[2105]: time="2026-01-17T00:22:16.934790758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:16.937096 containerd[2105]: time="2026-01-17T00:22:16.936329906Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.563611089s" Jan 17 00:22:16.937096 containerd[2105]: time="2026-01-17T00:22:16.936381492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:22:20.136838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:20.143435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:20.186073 systemd[1]: Reloading requested from client PID 2861 ('systemctl') (unit session-7.scope)... Jan 17 00:22:20.186090 systemd[1]: Reloading... Jan 17 00:22:20.316157 zram_generator::config[2902]: No configuration found. Jan 17 00:22:20.489968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:22:20.573544 systemd[1]: Reloading finished in 386 ms. Jan 17 00:22:20.601606 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:22:20.630260 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:22:20.630447 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:22:20.631342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:20.641511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:20.900271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:20.912540 (kubelet)[2979]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:22:20.974351 kubelet[2979]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:22:20.975088 kubelet[2979]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
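[Editor's note] From here on, nearly every client call in the log fails with "dial tcp 172.31.29.247:6443: connect: connection refused": the kubelet comes up before the static kube-apiserver pod it is about to launch, so these errors are expected to clear once the sandboxes below start. A minimal reachability probe for that endpoint (address copied from the log; the probe itself is an illustrative sketch):

    // probe.go - dial the API server endpoint the kubelet keeps retrying.
    // Until the static kube-apiserver pod runs, this fails exactly like the
    // reflector entries below: connect: connection refused.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "172.31.29.247:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not up yet:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }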
Jan 17 00:22:20.975088 kubelet[2979]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:22:20.975088 kubelet[2979]: I0117 00:22:20.974960 2979 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:22:21.499898 kubelet[2979]: I0117 00:22:21.499822 2979 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:22:21.499898 kubelet[2979]: I0117 00:22:21.499874 2979 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:22:21.500230 kubelet[2979]: I0117 00:22:21.500188 2979 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:22:21.554267 kubelet[2979]: I0117 00:22:21.554206 2979 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:22:21.556001 kubelet[2979]: E0117 00:22:21.555724 2979 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.247:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:21.580035 kubelet[2979]: E0117 00:22:21.579954 2979 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:22:21.580035 kubelet[2979]: I0117 00:22:21.580022 2979 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:22:21.585654 kubelet[2979]: I0117 00:22:21.585612 2979 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:22:21.588260 kubelet[2979]: I0117 00:22:21.588181 2979 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:22:21.588446 kubelet[2979]: I0117 00:22:21.588241 2979 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:22:21.591339 kubelet[2979]: I0117 00:22:21.591288 2979 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:22:21.591339 kubelet[2979]: I0117 00:22:21.591327 2979 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:22:21.592796 kubelet[2979]: I0117 00:22:21.592743 2979 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:21.600109 kubelet[2979]: I0117 00:22:21.599666 2979 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:22:21.600109 kubelet[2979]: I0117 00:22:21.599713 2979 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:22:21.600109 kubelet[2979]: I0117 00:22:21.599738 2979 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:22:21.600109 kubelet[2979]: I0117 00:22:21.599750 2979 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:22:21.603963 kubelet[2979]: W0117 00:22:21.603899 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-247&limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:21.603963 kubelet[2979]: E0117 00:22:21.603957 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-247&limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:21.604362 kubelet[2979]: W0117 
00:22:21.604303 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:21.604362 kubelet[2979]: E0117 00:22:21.604354 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:21.606501 kubelet[2979]: I0117 00:22:21.605950 2979 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:22:21.613120 kubelet[2979]: I0117 00:22:21.613005 2979 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:22:21.613120 kubelet[2979]: W0117 00:22:21.613089 2979 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:22:21.616265 kubelet[2979]: I0117 00:22:21.616100 2979 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:22:21.616265 kubelet[2979]: I0117 00:22:21.616132 2979 server.go:1287] "Started kubelet" Jan 17 00:22:21.617420 kubelet[2979]: I0117 00:22:21.616557 2979 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:22:21.623714 kubelet[2979]: I0117 00:22:21.623642 2979 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:22:21.624236 kubelet[2979]: I0117 00:22:21.624034 2979 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:22:21.628066 kubelet[2979]: I0117 00:22:21.626924 2979 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:22:21.631269 kubelet[2979]: E0117 00:22:21.627791 2979 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.247:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.247:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-247.188b5cd361774f0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-247,UID:ip-172-31-29-247,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-247,},FirstTimestamp:2026-01-17 00:22:21.616115468 +0000 UTC m=+0.699486271,LastTimestamp:2026-01-17 00:22:21.616115468 +0000 UTC m=+0.699486271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-247,}" Jan 17 00:22:21.633291 kubelet[2979]: I0117 00:22:21.633268 2979 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:22:21.635831 kubelet[2979]: I0117 00:22:21.635811 2979 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:22:21.637492 kubelet[2979]: I0117 00:22:21.637058 2979 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:22:21.638607 kubelet[2979]: I0117 00:22:21.637069 2979 desired_state_of_world_populator.go:150] "Desired 
state populator starts to run" Jan 17 00:22:21.638689 kubelet[2979]: E0117 00:22:21.637217 2979 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-247\" not found" Jan 17 00:22:21.638987 kubelet[2979]: I0117 00:22:21.638976 2979 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:22:21.639594 kubelet[2979]: E0117 00:22:21.639571 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-247?timeout=10s\": dial tcp 172.31.29.247:6443: connect: connection refused" interval="200ms" Jan 17 00:22:21.640203 kubelet[2979]: W0117 00:22:21.640170 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:21.640601 kubelet[2979]: E0117 00:22:21.640283 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:21.642547 kubelet[2979]: I0117 00:22:21.642527 2979 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:22:21.642817 kubelet[2979]: I0117 00:22:21.642705 2979 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:22:21.646491 kubelet[2979]: I0117 00:22:21.646468 2979 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:22:21.658144 kubelet[2979]: I0117 00:22:21.658102 2979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:22:21.661388 kubelet[2979]: E0117 00:22:21.661362 2979 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:22:21.667555 kubelet[2979]: I0117 00:22:21.667518 2979 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:22:21.667691 kubelet[2979]: I0117 00:22:21.667683 2979 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:22:21.667745 kubelet[2979]: I0117 00:22:21.667739 2979 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
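[Editor's note] The "Failed to ensure lease exists, will retry" interval above starts at 200ms and doubles on each subsequent failure in this log: 400ms, 800ms, 1.6s, then 3.2s further down; a standard capped exponential backoff. A sketch of that schedule (the log shows only the first five steps, so the cap below is an assumption for illustration, not taken from the log):

    // backoff.go - the retry schedule visible in the lease-controller
    // entries: each failure doubles the wait, starting at 200ms.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // illustrative cap, not from the log
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: retry in %s\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
        // 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s - matching the logged intervals
    }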
Jan 17 00:22:21.667785 kubelet[2979]: I0117 00:22:21.667781 2979 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:22:21.667886 kubelet[2979]: E0117 00:22:21.667861 2979 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:22:21.675806 kubelet[2979]: W0117 00:22:21.675774 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:21.675925 kubelet[2979]: E0117 00:22:21.675813 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:21.677152 kubelet[2979]: I0117 00:22:21.676929 2979 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:22:21.677152 kubelet[2979]: I0117 00:22:21.676942 2979 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:22:21.677152 kubelet[2979]: I0117 00:22:21.676967 2979 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:21.679535 kubelet[2979]: I0117 00:22:21.679505 2979 policy_none.go:49] "None policy: Start" Jan 17 00:22:21.679535 kubelet[2979]: I0117 00:22:21.679534 2979 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:22:21.679667 kubelet[2979]: I0117 00:22:21.679545 2979 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:22:21.686073 kubelet[2979]: I0117 00:22:21.685037 2979 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:22:21.686073 kubelet[2979]: I0117 00:22:21.685229 2979 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:22:21.686073 kubelet[2979]: I0117 00:22:21.685239 2979 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:22:21.686325 kubelet[2979]: I0117 00:22:21.686314 2979 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:22:21.687782 kubelet[2979]: E0117 00:22:21.687753 2979 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:22:21.688081 kubelet[2979]: E0117 00:22:21.688066 2979 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-247\" not found" Jan 17 00:22:21.777002 kubelet[2979]: E0117 00:22:21.776889 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:21.780956 kubelet[2979]: E0117 00:22:21.780928 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:21.786948 kubelet[2979]: E0117 00:22:21.786917 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:21.789594 kubelet[2979]: I0117 00:22:21.789558 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-247" Jan 17 00:22:21.790183 kubelet[2979]: E0117 00:22:21.790156 2979 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.247:6443/api/v1/nodes\": dial tcp 172.31.29.247:6443: connect: connection refused" node="ip-172-31-29-247" Jan 17 00:22:21.840387 kubelet[2979]: I0117 00:22:21.840347 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:21.840561 kubelet[2979]: I0117 00:22:21.840545 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8128d22e87257b9c9753be88c2dfd7ef-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-247\" (UID: \"8128d22e87257b9c9753be88c2dfd7ef\") " pod="kube-system/kube-scheduler-ip-172-31-29-247" Jan 17 00:22:21.840633 kubelet[2979]: I0117 00:22:21.840624 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d50095088f5dfa0a66edc98d4855509c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-247\" (UID: \"d50095088f5dfa0a66edc98d4855509c\") " pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:21.840713 kubelet[2979]: I0117 00:22:21.840685 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d50095088f5dfa0a66edc98d4855509c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-247\" (UID: \"d50095088f5dfa0a66edc98d4855509c\") " pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:21.840713 kubelet[2979]: I0117 00:22:21.840710 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:21.840713 kubelet[2979]: I0117 00:22:21.840725 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:21.840713 kubelet[2979]: E0117 00:22:21.840630 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-247?timeout=10s\": dial tcp 172.31.29.247:6443: connect: connection refused" interval="400ms" Jan 17 00:22:21.840713 kubelet[2979]: I0117 00:22:21.840742 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d50095088f5dfa0a66edc98d4855509c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-247\" (UID: \"d50095088f5dfa0a66edc98d4855509c\") " pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:21.840995 kubelet[2979]: I0117 00:22:21.840757 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:21.840995 kubelet[2979]: I0117 00:22:21.840771 2979 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:21.993958 kubelet[2979]: I0117 00:22:21.993908 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-247" Jan 17 00:22:21.994336 kubelet[2979]: E0117 00:22:21.994232 2979 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.247:6443/api/v1/nodes\": dial tcp 172.31.29.247:6443: connect: connection refused" node="ip-172-31-29-247" Jan 17 00:22:22.080397 containerd[2105]: time="2026-01-17T00:22:22.080278263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-247,Uid:d50095088f5dfa0a66edc98d4855509c,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:22.082248 containerd[2105]: time="2026-01-17T00:22:22.082207173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-247,Uid:31ce131a411b4100c6f86ea5a62895da,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:22.090885 containerd[2105]: time="2026-01-17T00:22:22.090698225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-247,Uid:8128d22e87257b9c9753be88c2dfd7ef,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:22.241664 kubelet[2979]: E0117 00:22:22.241625 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-247?timeout=10s\": dial tcp 172.31.29.247:6443: connect: connection refused" interval="800ms" Jan 17 00:22:22.396516 kubelet[2979]: I0117 00:22:22.396423 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-247" Jan 17 00:22:22.396937 kubelet[2979]: E0117 00:22:22.396794 2979 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.247:6443/api/v1/nodes\": dial tcp 172.31.29.247:6443: connect: connection refused" node="ip-172-31-29-247" Jan 17 00:22:22.446008 kubelet[2979]: W0117 00:22:22.445946 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:22.446008 kubelet[2979]: E0117 00:22:22.446014 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:22.535844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650134190.mount: Deactivated successfully. Jan 17 00:22:22.542269 containerd[2105]: time="2026-01-17T00:22:22.542200625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:22.543325 containerd[2105]: time="2026-01-17T00:22:22.543282818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:22:22.544132 containerd[2105]: time="2026-01-17T00:22:22.544092810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:22.545578 containerd[2105]: time="2026-01-17T00:22:22.545507742Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:22.546980 containerd[2105]: time="2026-01-17T00:22:22.546933430Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:22.547899 containerd[2105]: time="2026-01-17T00:22:22.547853697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:22:22.548476 containerd[2105]: time="2026-01-17T00:22:22.548430460Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:22:22.552076 containerd[2105]: time="2026-01-17T00:22:22.550593869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:22:22.552404 containerd[2105]: time="2026-01-17T00:22:22.552366903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.999214ms" Jan 17 00:22:22.554271 containerd[2105]: time="2026-01-17T00:22:22.553681038Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 462.750823ms" Jan 17 00:22:22.560468 containerd[2105]: time="2026-01-17T00:22:22.560425820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 477.939914ms" Jan 17 00:22:22.727267 containerd[2105]: time="2026-01-17T00:22:22.726824096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:22.727267 containerd[2105]: time="2026-01-17T00:22:22.726901082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:22.727267 containerd[2105]: time="2026-01-17T00:22:22.726917230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.727267 containerd[2105]: time="2026-01-17T00:22:22.727021253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.745185 containerd[2105]: time="2026-01-17T00:22:22.745038791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:22.745349 containerd[2105]: time="2026-01-17T00:22:22.745234681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:22.745702 containerd[2105]: time="2026-01-17T00:22:22.745644542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.747373 containerd[2105]: time="2026-01-17T00:22:22.746736485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.752856 containerd[2105]: time="2026-01-17T00:22:22.750312916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:22.752856 containerd[2105]: time="2026-01-17T00:22:22.750379327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:22.752856 containerd[2105]: time="2026-01-17T00:22:22.750405955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.752856 containerd[2105]: time="2026-01-17T00:22:22.750518010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:22.846079 kubelet[2979]: W0117 00:22:22.845803 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:22.846079 kubelet[2979]: E0117 00:22:22.845880 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:22.869535 containerd[2105]: time="2026-01-17T00:22:22.869456082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-247,Uid:d50095088f5dfa0a66edc98d4855509c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca4c9c77357947174d9a9afb588a48753352f189b15db6ac56a0eab76b4e3a65\"" Jan 17 00:22:22.881275 containerd[2105]: time="2026-01-17T00:22:22.881091972Z" level=info msg="CreateContainer within sandbox \"ca4c9c77357947174d9a9afb588a48753352f189b15db6ac56a0eab76b4e3a65\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:22:22.885218 containerd[2105]: time="2026-01-17T00:22:22.884859395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-247,Uid:31ce131a411b4100c6f86ea5a62895da,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc330af4c8c38670d2520a4dafa3682938f315eeee7d5a9de5aa21935f4cab54\"" Jan 17 00:22:22.890424 containerd[2105]: time="2026-01-17T00:22:22.890374151Z" level=info msg="CreateContainer within sandbox \"dc330af4c8c38670d2520a4dafa3682938f315eeee7d5a9de5aa21935f4cab54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:22:22.895081 containerd[2105]: time="2026-01-17T00:22:22.894695139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-247,Uid:8128d22e87257b9c9753be88c2dfd7ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"da560e845827b24ab20ad4fae82e1427080e3d506e65a048f05dbf18966244b1\"" Jan 17 00:22:22.901175 containerd[2105]: time="2026-01-17T00:22:22.901141414Z" level=info msg="CreateContainer within sandbox \"da560e845827b24ab20ad4fae82e1427080e3d506e65a048f05dbf18966244b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:22:22.930469 containerd[2105]: time="2026-01-17T00:22:22.930429209Z" level=info msg="CreateContainer within sandbox \"ca4c9c77357947174d9a9afb588a48753352f189b15db6ac56a0eab76b4e3a65\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"11520614b355d9784209556e2d266afcf3e6c0f901bef4dc6bdd76a6535e8b3f\"" Jan 17 00:22:22.931513 containerd[2105]: time="2026-01-17T00:22:22.931488193Z" level=info msg="StartContainer for \"11520614b355d9784209556e2d266afcf3e6c0f901bef4dc6bdd76a6535e8b3f\"" Jan 17 00:22:22.946238 containerd[2105]: time="2026-01-17T00:22:22.946089928Z" level=info msg="CreateContainer within sandbox \"dc330af4c8c38670d2520a4dafa3682938f315eeee7d5a9de5aa21935f4cab54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17e15b402ec85579f1b9f4a4f7dbfa93d519e94f6e07d8ff19a951f3b026aab3\"" Jan 17 00:22:22.947116 containerd[2105]: time="2026-01-17T00:22:22.947036376Z" level=info msg="StartContainer for 
\"17e15b402ec85579f1b9f4a4f7dbfa93d519e94f6e07d8ff19a951f3b026aab3\"" Jan 17 00:22:22.952160 containerd[2105]: time="2026-01-17T00:22:22.952112760Z" level=info msg="CreateContainer within sandbox \"da560e845827b24ab20ad4fae82e1427080e3d506e65a048f05dbf18966244b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68927b667f5f5b36f641178435eee463d1973824fb088b35f3ae246c2ccab9bd\"" Jan 17 00:22:22.953103 containerd[2105]: time="2026-01-17T00:22:22.953075187Z" level=info msg="StartContainer for \"68927b667f5f5b36f641178435eee463d1973824fb088b35f3ae246c2ccab9bd\"" Jan 17 00:22:23.045222 kubelet[2979]: E0117 00:22:23.042714 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-247?timeout=10s\": dial tcp 172.31.29.247:6443: connect: connection refused" interval="1.6s" Jan 17 00:22:23.048684 kubelet[2979]: W0117 00:22:23.048545 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-247&limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:23.048907 kubelet[2979]: E0117 00:22:23.048882 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-247&limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:23.081197 containerd[2105]: time="2026-01-17T00:22:23.080106059Z" level=info msg="StartContainer for \"11520614b355d9784209556e2d266afcf3e6c0f901bef4dc6bdd76a6535e8b3f\" returns successfully" Jan 17 00:22:23.095942 containerd[2105]: time="2026-01-17T00:22:23.095900834Z" level=info msg="StartContainer for \"17e15b402ec85579f1b9f4a4f7dbfa93d519e94f6e07d8ff19a951f3b026aab3\" returns successfully" Jan 17 00:22:23.125115 containerd[2105]: time="2026-01-17T00:22:23.125035140Z" level=info msg="StartContainer for \"68927b667f5f5b36f641178435eee463d1973824fb088b35f3ae246c2ccab9bd\" returns successfully" Jan 17 00:22:23.175280 kubelet[2979]: W0117 00:22:23.175141 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:23.175280 kubelet[2979]: E0117 00:22:23.175236 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:23.199439 kubelet[2979]: I0117 00:22:23.199244 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-247" Jan 17 00:22:23.200069 kubelet[2979]: E0117 00:22:23.199998 2979 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.247:6443/api/v1/nodes\": dial tcp 172.31.29.247:6443: connect: connection refused" node="ip-172-31-29-247" Jan 17 00:22:23.682466 kubelet[2979]: E0117 00:22:23.682166 2979 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.247:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:23.692121 kubelet[2979]: E0117 00:22:23.691102 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:23.704599 kubelet[2979]: E0117 00:22:23.704317 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:23.706733 kubelet[2979]: E0117 00:22:23.706566 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:24.246064 kubelet[2979]: W0117 00:22:24.245218 2979 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.247:6443: connect: connection refused Jan 17 00:22:24.246064 kubelet[2979]: E0117 00:22:24.245271 2979 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.247:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:22:24.649782 kubelet[2979]: E0117 00:22:24.649637 2979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-247?timeout=10s\": dial tcp 172.31.29.247:6443: connect: connection refused" interval="3.2s" Jan 17 00:22:24.708090 kubelet[2979]: E0117 00:22:24.707727 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:24.710063 kubelet[2979]: E0117 00:22:24.708615 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:24.804670 kubelet[2979]: I0117 00:22:24.803580 2979 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-247" Jan 17 00:22:25.707344 kubelet[2979]: E0117 00:22:25.707311 2979 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-247\" not found" node="ip-172-31-29-247" Jan 17 00:22:27.044101 kubelet[2979]: I0117 00:22:27.043831 2979 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-247" Jan 17 00:22:27.044101 kubelet[2979]: E0117 00:22:27.043877 2979 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-29-247\": node \"ip-172-31-29-247\" not found" Jan 17 00:22:27.140097 kubelet[2979]: I0117 00:22:27.137754 2979 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:27.144771 kubelet[2979]: E0117 00:22:27.144715 2979 kubelet.go:3196] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:27.144771 kubelet[2979]: I0117 00:22:27.144758 2979 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:27.148879 kubelet[2979]: E0117 00:22:27.146960 2979 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:27.148879 kubelet[2979]: I0117 00:22:27.146997 2979 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-247" Jan 17 00:22:27.149793 kubelet[2979]: E0117 00:22:27.149753 2979 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-247" Jan 17 00:22:27.607495 kubelet[2979]: I0117 00:22:27.607438 2979 apiserver.go:52] "Watching apiserver" Jan 17 00:22:27.639606 kubelet[2979]: I0117 00:22:27.639564 2979 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:22:28.923293 kubelet[2979]: I0117 00:22:28.923242 2979 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:29.378342 systemd[1]: Reloading requested from client PID 3250 ('systemctl') (unit session-7.scope)... Jan 17 00:22:29.378363 systemd[1]: Reloading... Jan 17 00:22:29.463322 zram_generator::config[3287]: No configuration found. Jan 17 00:22:29.638397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:22:29.755707 systemd[1]: Reloading finished in 376 ms. Jan 17 00:22:29.805335 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:29.819409 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:22:29.819924 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:29.827321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:22:30.207850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:22:30.223217 (kubelet)[3360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:22:30.312290 kubelet[3360]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:22:30.312290 kubelet[3360]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:22:30.312290 kubelet[3360]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
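[Editor's note] The "no PriorityClass with name system-node-critical was found" rejections above are also transient: system-node-critical and system-cluster-critical are built-in classes the API server creates itself shortly after it starts serving, so the mirror pods succeed on a later sync (the "already exists" error further down implies they did). A hedged client-go sketch to confirm the class, assuming the standard kubeadm admin kubeconfig path on this host:

    // pcheck.go - look up the built-in PriorityClass the kubelet was waiting
    // for. Sketch using client-go; the kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pc, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(),
            "system-node-critical", metav1.GetOptions{})
        if err != nil {
            panic(err) // still absent: the apiserver has not finished bootstrapping
        }
        fmt.Println(pc.Name, pc.Value) // system-node-critical 2000001000
    }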
Jan 17 00:22:30.312878 kubelet[3360]: I0117 00:22:30.312402 3360 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:22:30.327177 kubelet[3360]: I0117 00:22:30.327117 3360 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:22:30.327177 kubelet[3360]: I0117 00:22:30.327154 3360 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:22:30.327442 kubelet[3360]: I0117 00:22:30.327425 3360 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:22:30.332075 kubelet[3360]: I0117 00:22:30.331982 3360 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:22:30.337993 kubelet[3360]: I0117 00:22:30.337475 3360 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:22:30.345386 kubelet[3360]: E0117 00:22:30.345337 3360 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:22:30.345540 kubelet[3360]: I0117 00:22:30.345399 3360 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:22:30.349421 kubelet[3360]: I0117 00:22:30.349378 3360 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:22:30.351907 kubelet[3360]: I0117 00:22:30.351118 3360 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:22:30.351907 kubelet[3360]: I0117 00:22:30.351182 3360 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:22:30.351907 kubelet[3360]: I0117 00:22:30.351417 3360 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 17 00:22:30.351907 kubelet[3360]: I0117 00:22:30.351432 3360 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:22:30.355092 kubelet[3360]: I0117 00:22:30.355036 3360 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:30.355375 kubelet[3360]: I0117 00:22:30.355358 3360 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:22:30.363523 kubelet[3360]: I0117 00:22:30.362999 3360 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:22:30.363523 kubelet[3360]: I0117 00:22:30.363093 3360 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:22:30.363523 kubelet[3360]: I0117 00:22:30.363110 3360 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:22:30.370406 kubelet[3360]: I0117 00:22:30.370239 3360 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:22:30.375557 kubelet[3360]: I0117 00:22:30.375513 3360 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:22:30.389179 kubelet[3360]: I0117 00:22:30.387682 3360 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:22:30.389179 kubelet[3360]: I0117 00:22:30.387729 3360 server.go:1287] "Started kubelet" Jan 17 00:22:30.389469 kubelet[3360]: I0117 00:22:30.389438 3360 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:22:30.390413 kubelet[3360]: I0117 00:22:30.390350 3360 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:22:30.391403 kubelet[3360]: I0117 00:22:30.391295 3360 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:22:30.394791 kubelet[3360]: I0117 00:22:30.394069 3360 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:22:30.401092 kubelet[3360]: I0117 00:22:30.400635 3360 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:22:30.405284 kubelet[3360]: I0117 00:22:30.405254 3360 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:22:30.415798 kubelet[3360]: I0117 00:22:30.413551 3360 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:22:30.415798 kubelet[3360]: I0117 00:22:30.414193 3360 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:22:30.415798 kubelet[3360]: I0117 00:22:30.414355 3360 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:22:30.420769 kubelet[3360]: I0117 00:22:30.420736 3360 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:22:30.420909 kubelet[3360]: I0117 00:22:30.420878 3360 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:22:30.424553 kubelet[3360]: I0117 00:22:30.424523 3360 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:22:30.426225 kubelet[3360]: E0117 00:22:30.426199 3360 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:22:30.429054 kubelet[3360]: I0117 00:22:30.428996 3360 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:22:30.430821 kubelet[3360]: I0117 00:22:30.430792 3360 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:22:30.430988 kubelet[3360]: I0117 00:22:30.430977 3360 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:22:30.431106 kubelet[3360]: I0117 00:22:30.431095 3360 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:22:30.431184 kubelet[3360]: I0117 00:22:30.431176 3360 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:22:30.431306 kubelet[3360]: E0117 00:22:30.431287 3360 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:22:30.522858 kubelet[3360]: I0117 00:22:30.522741 3360 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:22:30.522858 kubelet[3360]: I0117 00:22:30.522767 3360 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:22:30.522858 kubelet[3360]: I0117 00:22:30.522790 3360 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:22:30.523094 kubelet[3360]: I0117 00:22:30.522994 3360 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:22:30.523094 kubelet[3360]: I0117 00:22:30.523008 3360 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:22:30.523094 kubelet[3360]: I0117 00:22:30.523033 3360 policy_none.go:49] "None policy: Start" Jan 17 00:22:30.523094 kubelet[3360]: I0117 00:22:30.523060 3360 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:22:30.525260 kubelet[3360]: I0117 00:22:30.523108 3360 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:22:30.525260 kubelet[3360]: I0117 00:22:30.523300 3360 state_mem.go:75] "Updated machine memory state" Jan 17 00:22:30.525986 kubelet[3360]: I0117 00:22:30.525537 3360 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:22:30.525986 kubelet[3360]: I0117 00:22:30.525728 3360 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:22:30.525986 kubelet[3360]: I0117 00:22:30.525743 3360 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:22:30.531075 kubelet[3360]: I0117 00:22:30.530763 3360 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:22:30.533836 kubelet[3360]: I0117 00:22:30.533801 3360 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-247" Jan 17 00:22:30.536479 kubelet[3360]: E0117 00:22:30.536441 3360 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:22:30.539445 kubelet[3360]: I0117 00:22:30.538513 3360 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:30.540761 kubelet[3360]: I0117 00:22:30.540725 3360 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:30.553099 kubelet[3360]: E0117 00:22:30.552845 3360 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-247\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:30.620839 kubelet[3360]: I0117 00:22:30.620614 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8128d22e87257b9c9753be88c2dfd7ef-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-247\" (UID: \"8128d22e87257b9c9753be88c2dfd7ef\") " pod="kube-system/kube-scheduler-ip-172-31-29-247" Jan 17 00:22:30.620839 kubelet[3360]: I0117 00:22:30.620655 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d50095088f5dfa0a66edc98d4855509c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-247\" (UID: \"d50095088f5dfa0a66edc98d4855509c\") " pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:30.620839 kubelet[3360]: I0117 00:22:30.620673 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d50095088f5dfa0a66edc98d4855509c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-247\" (UID: \"d50095088f5dfa0a66edc98d4855509c\") " pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:30.620839 kubelet[3360]: I0117 00:22:30.620689 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d50095088f5dfa0a66edc98d4855509c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-247\" (UID: \"d50095088f5dfa0a66edc98d4855509c\") " pod="kube-system/kube-apiserver-ip-172-31-29-247" Jan 17 00:22:30.620839 kubelet[3360]: I0117 00:22:30.620708 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:30.621098 kubelet[3360]: I0117 00:22:30.620723 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:30.621098 kubelet[3360]: I0117 00:22:30.620738 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:30.621098 
kubelet[3360]: I0117 00:22:30.620754 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:30.621098 kubelet[3360]: I0117 00:22:30.620771 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31ce131a411b4100c6f86ea5a62895da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-247\" (UID: \"31ce131a411b4100c6f86ea5a62895da\") " pod="kube-system/kube-controller-manager-ip-172-31-29-247" Jan 17 00:22:30.645927 kubelet[3360]: I0117 00:22:30.645533 3360 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-247" Jan 17 00:22:30.657141 kubelet[3360]: I0117 00:22:30.656977 3360 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-247" Jan 17 00:22:30.657141 kubelet[3360]: I0117 00:22:30.657141 3360 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-247" Jan 17 00:22:31.383183 kubelet[3360]: I0117 00:22:31.383139 3360 apiserver.go:52] "Watching apiserver" Jan 17 00:22:31.415005 kubelet[3360]: I0117 00:22:31.414937 3360 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:22:31.478072 kubelet[3360]: I0117 00:22:31.476809 3360 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-247" Jan 17 00:22:31.486455 kubelet[3360]: E0117 00:22:31.486346 3360 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-247\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-247" Jan 17 00:22:31.513960 kubelet[3360]: I0117 00:22:31.512921 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-247" podStartSLOduration=1.512860163 podStartE2EDuration="1.512860163s" podCreationTimestamp="2026-01-17 00:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:31.511408854 +0000 UTC m=+1.260048980" watchObservedRunningTime="2026-01-17 00:22:31.512860163 +0000 UTC m=+1.261500279" Jan 17 00:22:31.538082 kubelet[3360]: I0117 00:22:31.536582 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-247" podStartSLOduration=3.536566051 podStartE2EDuration="3.536566051s" podCreationTimestamp="2026-01-17 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:31.52474664 +0000 UTC m=+1.273386764" watchObservedRunningTime="2026-01-17 00:22:31.536566051 +0000 UTC m=+1.285206172" Jan 17 00:22:31.548362 kubelet[3360]: I0117 00:22:31.548301 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-247" podStartSLOduration=1.5482846860000001 podStartE2EDuration="1.548284686s" podCreationTimestamp="2026-01-17 00:22:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:31.536796018 +0000 UTC m=+1.285436142" 
watchObservedRunningTime="2026-01-17 00:22:31.548284686 +0000 UTC m=+1.296924805" Jan 17 00:22:34.294302 kubelet[3360]: I0117 00:22:34.294210 3360 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:22:34.295500 containerd[2105]: time="2026-01-17T00:22:34.295451473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:22:34.296578 kubelet[3360]: I0117 00:22:34.295708 3360 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:22:34.347201 update_engine[2082]: I20260117 00:22:34.347122 2082 update_attempter.cc:509] Updating boot flags... Jan 17 00:22:34.460099 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3416) Jan 17 00:22:34.604170 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3415) Jan 17 00:22:35.050807 kubelet[3360]: I0117 00:22:35.050748 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffb0f34d-77f6-468d-9df3-47d7bd4ab97e-kube-proxy\") pod \"kube-proxy-tpjf7\" (UID: \"ffb0f34d-77f6-468d-9df3-47d7bd4ab97e\") " pod="kube-system/kube-proxy-tpjf7" Jan 17 00:22:35.050807 kubelet[3360]: I0117 00:22:35.050801 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb0f34d-77f6-468d-9df3-47d7bd4ab97e-lib-modules\") pod \"kube-proxy-tpjf7\" (UID: \"ffb0f34d-77f6-468d-9df3-47d7bd4ab97e\") " pod="kube-system/kube-proxy-tpjf7" Jan 17 00:22:35.050984 kubelet[3360]: I0117 00:22:35.050820 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb0f34d-77f6-468d-9df3-47d7bd4ab97e-xtables-lock\") pod \"kube-proxy-tpjf7\" (UID: \"ffb0f34d-77f6-468d-9df3-47d7bd4ab97e\") " pod="kube-system/kube-proxy-tpjf7" Jan 17 00:22:35.050984 kubelet[3360]: I0117 00:22:35.050836 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz8cd\" (UniqueName: \"kubernetes.io/projected/ffb0f34d-77f6-468d-9df3-47d7bd4ab97e-kube-api-access-zz8cd\") pod \"kube-proxy-tpjf7\" (UID: \"ffb0f34d-77f6-468d-9df3-47d7bd4ab97e\") " pod="kube-system/kube-proxy-tpjf7" Jan 17 00:22:35.159649 kubelet[3360]: E0117 00:22:35.159599 3360 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 00:22:35.159649 kubelet[3360]: E0117 00:22:35.159660 3360 projected.go:194] Error preparing data for projected volume kube-api-access-zz8cd for pod kube-system/kube-proxy-tpjf7: configmap "kube-root-ca.crt" not found Jan 17 00:22:35.159854 kubelet[3360]: E0117 00:22:35.159738 3360 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffb0f34d-77f6-468d-9df3-47d7bd4ab97e-kube-api-access-zz8cd podName:ffb0f34d-77f6-468d-9df3-47d7bd4ab97e nodeName:}" failed. No retries permitted until 2026-01-17 00:22:35.65971292 +0000 UTC m=+5.408353040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zz8cd" (UniqueName: "kubernetes.io/projected/ffb0f34d-77f6-468d-9df3-47d7bd4ab97e-kube-api-access-zz8cd") pod "kube-proxy-tpjf7" (UID: "ffb0f34d-77f6-468d-9df3-47d7bd4ab97e") : configmap "kube-root-ca.crt" not found Jan 17 00:22:35.454587 kubelet[3360]: I0117 00:22:35.454367 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1854e129-68c6-4b3b-a731-b26abb9a1bf9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v6s55\" (UID: \"1854e129-68c6-4b3b-a731-b26abb9a1bf9\") " pod="tigera-operator/tigera-operator-7dcd859c48-v6s55" Jan 17 00:22:35.454587 kubelet[3360]: I0117 00:22:35.454439 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7jl\" (UniqueName: \"kubernetes.io/projected/1854e129-68c6-4b3b-a731-b26abb9a1bf9-kube-api-access-td7jl\") pod \"tigera-operator-7dcd859c48-v6s55\" (UID: \"1854e129-68c6-4b3b-a731-b26abb9a1bf9\") " pod="tigera-operator/tigera-operator-7dcd859c48-v6s55" Jan 17 00:22:35.704455 containerd[2105]: time="2026-01-17T00:22:35.704405420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v6s55,Uid:1854e129-68c6-4b3b-a731-b26abb9a1bf9,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:22:35.736721 containerd[2105]: time="2026-01-17T00:22:35.736210516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:35.736721 containerd[2105]: time="2026-01-17T00:22:35.736312483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:35.736721 containerd[2105]: time="2026-01-17T00:22:35.736342134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:35.736721 containerd[2105]: time="2026-01-17T00:22:35.736458646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:35.766702 systemd[1]: run-containerd-runc-k8s.io-6767d4553e94249905bcce7bbda06806b7b68fd87650dd49495b062bcc969341-runc.5gLYtW.mount: Deactivated successfully. Jan 17 00:22:35.819223 containerd[2105]: time="2026-01-17T00:22:35.819182295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v6s55,Uid:1854e129-68c6-4b3b-a731-b26abb9a1bf9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6767d4553e94249905bcce7bbda06806b7b68fd87650dd49495b062bcc969341\"" Jan 17 00:22:35.821898 containerd[2105]: time="2026-01-17T00:22:35.821833830Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:22:35.929933 containerd[2105]: time="2026-01-17T00:22:35.929890258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tpjf7,Uid:ffb0f34d-77f6-468d-9df3-47d7bd4ab97e,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:35.959600 containerd[2105]: time="2026-01-17T00:22:35.959303638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:35.959600 containerd[2105]: time="2026-01-17T00:22:35.959375824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:35.959600 containerd[2105]: time="2026-01-17T00:22:35.959407249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:35.959600 containerd[2105]: time="2026-01-17T00:22:35.959525455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:36.025381 containerd[2105]: time="2026-01-17T00:22:36.025191106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tpjf7,Uid:ffb0f34d-77f6-468d-9df3-47d7bd4ab97e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f2f90f140be2e0255f034f843f69c2a95f2af4d07197ac7116ad3be9053c8f9\"" Jan 17 00:22:36.031475 containerd[2105]: time="2026-01-17T00:22:36.031155085Z" level=info msg="CreateContainer within sandbox \"6f2f90f140be2e0255f034f843f69c2a95f2af4d07197ac7116ad3be9053c8f9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:22:36.065102 containerd[2105]: time="2026-01-17T00:22:36.064187910Z" level=info msg="CreateContainer within sandbox \"6f2f90f140be2e0255f034f843f69c2a95f2af4d07197ac7116ad3be9053c8f9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a67a08adfa28e11159a2355cfba35abf8884c544a5c94dd6cac4e3f4d4f9adc\"" Jan 17 00:22:36.066736 containerd[2105]: time="2026-01-17T00:22:36.066673144Z" level=info msg="StartContainer for \"4a67a08adfa28e11159a2355cfba35abf8884c544a5c94dd6cac4e3f4d4f9adc\"" Jan 17 00:22:36.135035 containerd[2105]: time="2026-01-17T00:22:36.134783825Z" level=info msg="StartContainer for \"4a67a08adfa28e11159a2355cfba35abf8884c544a5c94dd6cac4e3f4d4f9adc\" returns successfully" Jan 17 00:22:36.514372 kubelet[3360]: I0117 00:22:36.514296 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tpjf7" podStartSLOduration=1.5142781090000001 podStartE2EDuration="1.514278109s" podCreationTimestamp="2026-01-17 00:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:36.503758752 +0000 UTC m=+6.252398881" watchObservedRunningTime="2026-01-17 00:22:36.514278109 +0000 UTC m=+6.262918214" Jan 17 00:22:37.071908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382980630.mount: Deactivated successfully. 
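[editor's note] The MountVolume.SetUp retry above is benign ordering: the projected service-account volume for kube-proxy-tpjf7 needs the kube-root-ca.crt ConfigMap, which the controller manager's root-ca-cert publisher has not yet written into the namespace, so kubelet backs off 500ms and tries again. A minimal client-go sketch of the same check, assuming in-cluster credentials; the poll loop is illustrative, not kubelet's own code:

```go
// Poll for the kube-root-ca.crt ConfigMap that the projected
// service-account volume above is waiting on.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		_, err := cs.CoreV1().ConfigMaps("kube-system").
			Get(context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
		if err == nil {
			fmt.Println("kube-root-ca.crt published; projected volume can mount")
			return
		}
		// kubelet itself retries with backoff (500ms in the entry above)
		time.Sleep(500 * time.Millisecond)
	}
}
```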
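[editor's note] The RunPodSandbox / CreateContainer / StartContainer entries above are CRI calls landing in containerd (v1.7.21 per the kubelet entry earlier). A sketch using the containerd Go client to list what the CRI plugin is tracking, assuming the stock /run/containerd/containerd.sock socket and the k8s.io namespace the CRI plugin uses:

```go
// List containers in containerd's "k8s.io" namespace, which should show
// both the pause (sandbox) containers and workload containers whose IDs
// appear in the log entries above.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		name := "<unknown image>"
		if img, err := c.Image(ctx); err == nil {
			name = img.Name()
		}
		fmt.Printf("%s  %s\n", c.ID(), name)
	}
}
```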
Jan 17 00:22:37.978969 containerd[2105]: time="2026-01-17T00:22:37.978904133Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:37.982039 containerd[2105]: time="2026-01-17T00:22:37.980930328Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:22:37.984200 containerd[2105]: time="2026-01-17T00:22:37.984151506Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:37.986889 containerd[2105]: time="2026-01-17T00:22:37.986838060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:37.988184 containerd[2105]: time="2026-01-17T00:22:37.988138454Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.166257865s" Jan 17 00:22:37.988287 containerd[2105]: time="2026-01-17T00:22:37.988190698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:22:37.992561 containerd[2105]: time="2026-01-17T00:22:37.992502820Z" level=info msg="CreateContainer within sandbox \"6767d4553e94249905bcce7bbda06806b7b68fd87650dd49495b062bcc969341\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:22:38.010646 containerd[2105]: time="2026-01-17T00:22:38.010386038Z" level=info msg="CreateContainer within sandbox \"6767d4553e94249905bcce7bbda06806b7b68fd87650dd49495b062bcc969341\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9c7b38b476cdf968c70a697dab4213e2c17b6ae2e8ea4edeaa206b9c5d5d68e6\"" Jan 17 00:22:38.012805 containerd[2105]: time="2026-01-17T00:22:38.011400037Z" level=info msg="StartContainer for \"9c7b38b476cdf968c70a697dab4213e2c17b6ae2e8ea4edeaa206b9c5d5d68e6\"" Jan 17 00:22:38.106673 containerd[2105]: time="2026-01-17T00:22:38.106591593Z" level=info msg="StartContainer for \"9c7b38b476cdf968c70a697dab4213e2c17b6ae2e8ea4edeaa206b9c5d5d68e6\" returns successfully" Jan 17 00:22:39.125610 kubelet[3360]: I0117 00:22:39.125475 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v6s55" podStartSLOduration=1.957069012 podStartE2EDuration="4.125456293s" podCreationTimestamp="2026-01-17 00:22:35 +0000 UTC" firstStartedPulling="2026-01-17 00:22:35.82091435 +0000 UTC m=+5.569554468" lastFinishedPulling="2026-01-17 00:22:37.989301642 +0000 UTC m=+7.737941749" observedRunningTime="2026-01-17 00:22:38.512440371 +0000 UTC m=+8.261080497" watchObservedRunningTime="2026-01-17 00:22:39.125456293 +0000 UTC m=+8.874096417" Jan 17 00:22:45.030765 sudo[2453]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:45.118743 sshd[2449]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:45.125194 systemd[1]: sshd@6-172.31.29.247:22-4.153.228.146:52722.service: Deactivated successfully. 
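[editor's note] As a sanity check on the Pulled entry above: 25,057,686 bytes fetched in 2.166257865s works out to roughly 11.6 MB/s (about 11.0 MiB/s), using only the numbers the log itself reports:

```go
// Back-of-envelope throughput from the Pulled entry's own figures.
package main

import "fmt"

func main() {
	const bytes = 25057686.0    // image size from the Pulled entry
	const seconds = 2.166257865 // pull duration from the same entry
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n",
		bytes/seconds/1e6, bytes/seconds/(1<<20))
}
```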
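[editor's note] The long run of FlexVolume errors that follows has a single root cause: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec, finds the nodeagent~uds plugin directory, execs its uds driver with the init argument, and decodes stdout as JSON. The binary is not installed yet (the calico-node pod below mounts flexvol-driver-host precisely so an init container can install it), so the exec fails, stdout is empty, and JSON decoding reports "unexpected end of JSON input" — the triplet repeated below. A sketch of that exec-and-decode convention; DriverStatus is a hand-rolled stand-in for the driver's reply shape, not kubelet's type:

```go
// Emulate kubelet's FlexVolume "init" probe: exec the driver, then
// decode stdout as JSON. An empty stdout (missing binary) yields
// exactly "unexpected end of JSON input", as in the entries below.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus models the minimal reply a FlexVolume driver prints,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func initDriver(path string) (*DriverStatus, error) {
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var st DriverStatus
	jsonErr := json.Unmarshal(out, &st)
	if execErr != nil || jsonErr != nil {
		// Mirrors the paired driver-call.go / plugins.go errors below:
		// the exec failure and the unmarshal failure are both logged.
		return nil, fmt.Errorf("exec: %v, unmarshal: %v, output: %q",
			execErr, jsonErr, out)
	}
	return &st, nil
}

func main() {
	st, err := initDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("status=%s capabilities=%v\n", st.Status, st.Capabilities)
}
```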
Jan 17 00:22:45.134996 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:22:45.140688 systemd-logind[2080]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:22:45.145122 systemd-logind[2080]: Removed session 7. Jan 17 00:22:51.573785 kubelet[3360]: I0117 00:22:51.573504 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df5a1b68-3020-49bf-8f48-7e12e7440f61-tigera-ca-bundle\") pod \"calico-typha-67675fbd7c-blvlp\" (UID: \"df5a1b68-3020-49bf-8f48-7e12e7440f61\") " pod="calico-system/calico-typha-67675fbd7c-blvlp" Jan 17 00:22:51.573785 kubelet[3360]: I0117 00:22:51.573554 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dnn\" (UniqueName: \"kubernetes.io/projected/df5a1b68-3020-49bf-8f48-7e12e7440f61-kube-api-access-t6dnn\") pod \"calico-typha-67675fbd7c-blvlp\" (UID: \"df5a1b68-3020-49bf-8f48-7e12e7440f61\") " pod="calico-system/calico-typha-67675fbd7c-blvlp" Jan 17 00:22:51.573785 kubelet[3360]: I0117 00:22:51.573584 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/df5a1b68-3020-49bf-8f48-7e12e7440f61-typha-certs\") pod \"calico-typha-67675fbd7c-blvlp\" (UID: \"df5a1b68-3020-49bf-8f48-7e12e7440f61\") " pod="calico-system/calico-typha-67675fbd7c-blvlp" Jan 17 00:22:51.775406 kubelet[3360]: I0117 00:22:51.774655 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-lib-modules\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775406 kubelet[3360]: I0117 00:22:51.774714 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff5f70a-21b2-431f-b18d-f4075ad65c71-tigera-ca-bundle\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775406 kubelet[3360]: I0117 00:22:51.774741 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-var-lib-calico\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775406 kubelet[3360]: I0117 00:22:51.775089 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aff5f70a-21b2-431f-b18d-f4075ad65c71-node-certs\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775406 kubelet[3360]: I0117 00:22:51.775159 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-cni-net-dir\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775775 kubelet[3360]: I0117 00:22:51.775183 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-policysync\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775775 kubelet[3360]: I0117 00:22:51.775237 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-cni-log-dir\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775775 kubelet[3360]: I0117 00:22:51.775261 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5p8\" (UniqueName: \"kubernetes.io/projected/aff5f70a-21b2-431f-b18d-f4075ad65c71-kube-api-access-9n5p8\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775775 kubelet[3360]: I0117 00:22:51.775349 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-flexvol-driver-host\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.775775 kubelet[3360]: I0117 00:22:51.775371 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-xtables-lock\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.776169 kubelet[3360]: I0117 00:22:51.776141 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-cni-bin-dir\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.776264 kubelet[3360]: I0117 00:22:51.776223 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aff5f70a-21b2-431f-b18d-f4075ad65c71-var-run-calico\") pod \"calico-node-lvvlx\" (UID: \"aff5f70a-21b2-431f-b18d-f4075ad65c71\") " pod="calico-system/calico-node-lvvlx" Jan 17 00:22:51.867526 kubelet[3360]: E0117 00:22:51.867387 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:22:51.881644 containerd[2105]: time="2026-01-17T00:22:51.881372104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67675fbd7c-blvlp,Uid:df5a1b68-3020-49bf-8f48-7e12e7440f61,Namespace:calico-system,Attempt:0,}" Jan 17 00:22:51.885266 kubelet[3360]: E0117 00:22:51.883827 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.885266 kubelet[3360]: W0117 00:22:51.883845 3360 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.891084 kubelet[3360]: E0117 00:22:51.886317 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.891084 kubelet[3360]: E0117 00:22:51.888115 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.891084 kubelet[3360]: W0117 00:22:51.890200 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.891084 kubelet[3360]: E0117 00:22:51.890226 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.893290 kubelet[3360]: E0117 00:22:51.893268 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.894078 kubelet[3360]: W0117 00:22:51.893305 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.894078 kubelet[3360]: E0117 00:22:51.893331 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.894410 kubelet[3360]: E0117 00:22:51.894366 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.894452 kubelet[3360]: W0117 00:22:51.894411 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.894452 kubelet[3360]: E0117 00:22:51.894436 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.898061 kubelet[3360]: E0117 00:22:51.897200 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.898061 kubelet[3360]: W0117 00:22:51.897219 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.898061 kubelet[3360]: E0117 00:22:51.897237 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:51.916334 kubelet[3360]: E0117 00:22:51.916260 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.916334 kubelet[3360]: W0117 00:22:51.916279 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.916334 kubelet[3360]: E0117 00:22:51.916304 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.946323 containerd[2105]: time="2026-01-17T00:22:51.945745591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:51.946323 containerd[2105]: time="2026-01-17T00:22:51.945799440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:51.946323 containerd[2105]: time="2026-01-17T00:22:51.945810017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:51.946323 containerd[2105]: time="2026-01-17T00:22:51.945896177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:51.962254 kubelet[3360]: E0117 00:22:51.962218 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.964254 kubelet[3360]: W0117 00:22:51.964106 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.964254 kubelet[3360]: E0117 00:22:51.964142 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.965485 kubelet[3360]: E0117 00:22:51.964974 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.965485 kubelet[3360]: W0117 00:22:51.964989 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.965485 kubelet[3360]: E0117 00:22:51.965003 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.966913 kubelet[3360]: E0117 00:22:51.966731 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.966913 kubelet[3360]: W0117 00:22:51.966747 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.966913 kubelet[3360]: E0117 00:22:51.966762 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:51.967324 kubelet[3360]: E0117 00:22:51.967266 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.967324 kubelet[3360]: W0117 00:22:51.967277 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.967324 kubelet[3360]: E0117 00:22:51.967289 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.967782 kubelet[3360]: E0117 00:22:51.967675 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.967782 kubelet[3360]: W0117 00:22:51.967685 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.967782 kubelet[3360]: E0117 00:22:51.967696 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.968083 kubelet[3360]: E0117 00:22:51.968018 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.968083 kubelet[3360]: W0117 00:22:51.968027 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.968083 kubelet[3360]: E0117 00:22:51.968037 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.968645 kubelet[3360]: E0117 00:22:51.968375 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.968645 kubelet[3360]: W0117 00:22:51.968384 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.968645 kubelet[3360]: E0117 00:22:51.968393 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.970027 kubelet[3360]: E0117 00:22:51.969504 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.970027 kubelet[3360]: W0117 00:22:51.969514 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.970027 kubelet[3360]: E0117 00:22:51.969527 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:51.971352 kubelet[3360]: E0117 00:22:51.971037 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.971352 kubelet[3360]: W0117 00:22:51.971167 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.971352 kubelet[3360]: E0117 00:22:51.971216 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.972749 kubelet[3360]: E0117 00:22:51.972393 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.972749 kubelet[3360]: W0117 00:22:51.972405 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.972749 kubelet[3360]: E0117 00:22:51.972417 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.974476 kubelet[3360]: E0117 00:22:51.973969 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.974476 kubelet[3360]: W0117 00:22:51.973981 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.974476 kubelet[3360]: E0117 00:22:51.973993 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.975599 kubelet[3360]: E0117 00:22:51.975521 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.975599 kubelet[3360]: W0117 00:22:51.975533 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.975599 kubelet[3360]: E0117 00:22:51.975544 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.975957 kubelet[3360]: E0117 00:22:51.975877 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.975957 kubelet[3360]: W0117 00:22:51.975886 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.975957 kubelet[3360]: E0117 00:22:51.975897 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:51.977660 kubelet[3360]: E0117 00:22:51.977535 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.977660 kubelet[3360]: W0117 00:22:51.977547 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.977660 kubelet[3360]: E0117 00:22:51.977559 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.983076 kubelet[3360]: E0117 00:22:51.981969 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.983076 kubelet[3360]: W0117 00:22:51.981992 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.983076 kubelet[3360]: E0117 00:22:51.982014 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.985800 kubelet[3360]: E0117 00:22:51.984966 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.985800 kubelet[3360]: W0117 00:22:51.984989 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.985800 kubelet[3360]: E0117 00:22:51.985012 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.985800 kubelet[3360]: E0117 00:22:51.985438 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.985800 kubelet[3360]: W0117 00:22:51.985452 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.985800 kubelet[3360]: E0117 00:22:51.985489 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.985800 kubelet[3360]: E0117 00:22:51.985762 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.985800 kubelet[3360]: W0117 00:22:51.985773 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.985800 kubelet[3360]: E0117 00:22:51.985814 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:51.986576 kubelet[3360]: E0117 00:22:51.986254 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.986576 kubelet[3360]: W0117 00:22:51.986268 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.986576 kubelet[3360]: E0117 00:22:51.986283 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.986576 kubelet[3360]: E0117 00:22:51.986573 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.998687 kubelet[3360]: W0117 00:22:51.986584 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.998687 kubelet[3360]: E0117 00:22:51.986597 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.998687 kubelet[3360]: E0117 00:22:51.986966 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.998687 kubelet[3360]: W0117 00:22:51.986977 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.998687 kubelet[3360]: E0117 00:22:51.986990 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:51.998687 kubelet[3360]: I0117 00:22:51.987021 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7198563-8b4e-4b52-ad88-2f9e6d09e79c-kubelet-dir\") pod \"csi-node-driver-hbb8z\" (UID: \"d7198563-8b4e-4b52-ad88-2f9e6d09e79c\") " pod="calico-system/csi-node-driver-hbb8z" Jan 17 00:22:51.998687 kubelet[3360]: E0117 00:22:51.987289 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:51.998687 kubelet[3360]: W0117 00:22:51.987304 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:51.998687 kubelet[3360]: E0117 00:22:51.987318 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:52.005901 kubelet[3360]: I0117 00:22:51.987342 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7198563-8b4e-4b52-ad88-2f9e6d09e79c-varrun\") pod \"csi-node-driver-hbb8z\" (UID: \"d7198563-8b4e-4b52-ad88-2f9e6d09e79c\") " pod="calico-system/csi-node-driver-hbb8z" Jan 17 00:22:52.005901 kubelet[3360]: E0117 00:22:51.990391 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.005901 kubelet[3360]: W0117 00:22:51.990409 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.005901 kubelet[3360]: E0117 00:22:51.990434 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.005901 kubelet[3360]: I0117 00:22:51.990509 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7198563-8b4e-4b52-ad88-2f9e6d09e79c-registration-dir\") pod \"csi-node-driver-hbb8z\" (UID: \"d7198563-8b4e-4b52-ad88-2f9e6d09e79c\") " pod="calico-system/csi-node-driver-hbb8z" Jan 17 00:22:52.005901 kubelet[3360]: E0117 00:22:51.997155 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.005901 kubelet[3360]: W0117 00:22:51.997177 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.005901 kubelet[3360]: E0117 00:22:51.997210 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.005901 kubelet[3360]: E0117 00:22:51.998174 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.009080 kubelet[3360]: W0117 00:22:51.998196 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.009080 kubelet[3360]: E0117 00:22:51.998286 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.009080 kubelet[3360]: E0117 00:22:52.000297 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.009080 kubelet[3360]: W0117 00:22:52.000313 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.009080 kubelet[3360]: E0117 00:22:52.001146 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:52.009080 kubelet[3360]: E0117 00:22:52.002167 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.009080 kubelet[3360]: W0117 00:22:52.002184 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.009080 kubelet[3360]: E0117 00:22:52.002284 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.009080 kubelet[3360]: I0117 00:22:52.002324 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d7198563-8b4e-4b52-ad88-2f9e6d09e79c-socket-dir\") pod \"csi-node-driver-hbb8z\" (UID: \"d7198563-8b4e-4b52-ad88-2f9e6d09e79c\") " pod="calico-system/csi-node-driver-hbb8z" Jan 17 00:22:52.009449 kubelet[3360]: E0117 00:22:52.004584 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.009449 kubelet[3360]: W0117 00:22:52.004602 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.009449 kubelet[3360]: E0117 00:22:52.004842 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.009449 kubelet[3360]: E0117 00:22:52.007740 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.009449 kubelet[3360]: W0117 00:22:52.007757 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.009449 kubelet[3360]: E0117 00:22:52.007780 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.015237 kubelet[3360]: E0117 00:22:52.013181 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.015237 kubelet[3360]: W0117 00:22:52.013209 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.015237 kubelet[3360]: E0117 00:22:52.013237 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:52.015237 kubelet[3360]: I0117 00:22:52.013276 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlttd\" (UniqueName: \"kubernetes.io/projected/d7198563-8b4e-4b52-ad88-2f9e6d09e79c-kube-api-access-tlttd\") pod \"csi-node-driver-hbb8z\" (UID: \"d7198563-8b4e-4b52-ad88-2f9e6d09e79c\") " pod="calico-system/csi-node-driver-hbb8z" Jan 17 00:22:52.015237 kubelet[3360]: E0117 00:22:52.013793 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.015237 kubelet[3360]: W0117 00:22:52.013812 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.015237 kubelet[3360]: E0117 00:22:52.013834 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.015935 kubelet[3360]: E0117 00:22:52.015875 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.015935 kubelet[3360]: W0117 00:22:52.015896 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.015935 kubelet[3360]: E0117 00:22:52.015932 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.016389 kubelet[3360]: E0117 00:22:52.016362 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.016389 kubelet[3360]: W0117 00:22:52.016386 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.016496 kubelet[3360]: E0117 00:22:52.016403 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.016810 kubelet[3360]: E0117 00:22:52.016761 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.016810 kubelet[3360]: W0117 00:22:52.016780 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.016810 kubelet[3360]: E0117 00:22:52.016794 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:52.017318 kubelet[3360]: E0117 00:22:52.017244 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.017318 kubelet[3360]: W0117 00:22:52.017260 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.017318 kubelet[3360]: E0117 00:22:52.017287 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:52.044929 containerd[2105]: time="2026-01-17T00:22:52.044875970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvvlx,Uid:aff5f70a-21b2-431f-b18d-f4075ad65c71,Namespace:calico-system,Attempt:0,}" Jan 17 00:22:52.075750 containerd[2105]: time="2026-01-17T00:22:52.075700393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67675fbd7c-blvlp,Uid:df5a1b68-3020-49bf-8f48-7e12e7440f61,Namespace:calico-system,Attempt:0,} returns sandbox id \"5da5c85fb1d099f3aac2e7e0eeb94fa5044123734b5ad19d3eeb95237de8eb63\"" Jan 17 00:22:52.085994 containerd[2105]: time="2026-01-17T00:22:52.085954243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:22:52.096154 containerd[2105]: time="2026-01-17T00:22:52.095957243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:52.096321 containerd[2105]: time="2026-01-17T00:22:52.096228983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:52.096321 containerd[2105]: time="2026-01-17T00:22:52.096295922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:52.097177 containerd[2105]: time="2026-01-17T00:22:52.097116452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:52.124373 kubelet[3360]: E0117 00:22:52.123336 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:52.124373 kubelet[3360]: W0117 00:22:52.123369 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:52.124373 kubelet[3360]: E0117 00:22:52.123412 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 17 00:22:52.189542 containerd[2105]: time="2026-01-17T00:22:52.189501322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lvvlx,Uid:aff5f70a-21b2-431f-b18d-f4075ad65c71,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fc95aab26127374466793dfb289698a08ed719f3c820e8049f4393a974b2e90\""
Jan 17 00:22:53.406380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484094660.mount: Deactivated successfully.
Jan 17 00:22:53.478826 kubelet[3360]: E0117 00:22:53.476171 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c"
Jan 17 00:22:54.410114 containerd[2105]: time="2026-01-17T00:22:54.409288317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:54.411275 containerd[2105]: time="2026-01-17T00:22:54.411186547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 00:22:54.412817 containerd[2105]: time="2026-01-17T00:22:54.412401238Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:54.414913 containerd[2105]: time="2026-01-17T00:22:54.414855929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:54.415779 containerd[2105]: time="2026-01-17T00:22:54.415744121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.329745964s"
Jan 17 00:22:54.415872 containerd[2105]: time="2026-01-17T00:22:54.415781959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 17 00:22:54.419340 containerd[2105]: time="2026-01-17T00:22:54.419301805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:22:54.450899 containerd[2105]: time="2026-01-17T00:22:54.450766324Z" level=info msg="CreateContainer within sandbox \"5da5c85fb1d099f3aac2e7e0eeb94fa5044123734b5ad19d3eeb95237de8eb63\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:22:54.467333 containerd[2105]: time="2026-01-17T00:22:54.467287439Z" level=info msg="CreateContainer within sandbox \"5da5c85fb1d099f3aac2e7e0eeb94fa5044123734b5ad19d3eeb95237de8eb63\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fb6608db79c1112aca8092ef8e0e98dda7d18ec96a57414c2a8d9d8d027ac980\""
Jan 17 00:22:54.468171 containerd[2105]: time="2026-01-17T00:22:54.468125886Z" level=info msg="StartContainer for \"fb6608db79c1112aca8092ef8e0e98dda7d18ec96a57414c2a8d9d8d027ac980\""
Jan 17 00:22:54.591451 containerd[2105]: time="2026-01-17T00:22:54.591351886Z" level=info msg="StartContainer for \"fb6608db79c1112aca8092ef8e0e98dda7d18ec96a57414c2a8d9d8d027ac980\" returns successfully"
Jan 17 00:22:54.699620 kubelet[3360]: I0117 00:22:54.699423 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67675fbd7c-blvlp" podStartSLOduration=1.3647164649999999 podStartE2EDuration="3.698169347s" podCreationTimestamp="2026-01-17 00:22:51 +0000 UTC" firstStartedPulling="2026-01-17 00:22:52.085528177 +0000 UTC m=+21.834168295" lastFinishedPulling="2026-01-17 00:22:54.418981061 +0000 UTC m=+24.167621177" observedRunningTime="2026-01-17 00:22:54.697853747 +0000 UTC m=+24.446493872" watchObservedRunningTime="2026-01-17 00:22:54.698169347 +0000 UTC m=+24.446809469"
Jan 17 00:22:54.723512 kubelet[3360]: E0117 00:22:54.723370 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:22:54.723512 kubelet[3360]: W0117 00:22:54.723394 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:22:54.728138 kubelet[3360]: E0117 00:22:54.727984 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:22:55.102338 systemd-resolved[1998]: Under memory pressure, flushing caches.
Jan 17 00:22:55.102406 systemd-resolved[1998]: Flushed all caches.
Jan 17 00:22:55.104259 systemd-journald[1577]: Under memory pressure, flushing caches.
Jan 17 00:22:55.431965 kubelet[3360]: E0117 00:22:55.431817 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c"
Jan 17 00:22:55.649336 containerd[2105]: time="2026-01-17T00:22:55.649273119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:55.651762 containerd[2105]: time="2026-01-17T00:22:55.651302896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 17 00:22:55.652211 kubelet[3360]: I0117 00:22:55.652178 3360 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 00:22:55.654307 containerd[2105]: time="2026-01-17T00:22:55.654005943Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:55.657630 containerd[2105]: time="2026-01-17T00:22:55.657582651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:22:55.662410 containerd[2105]: time="2026-01-17T00:22:55.658518703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.239173898s"
Jan 17 00:22:55.662410 containerd[2105]: time="2026-01-17T00:22:55.658546047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 17 00:22:55.665957 containerd[2105]: time="2026-01-17T00:22:55.665917347Z" level=info msg="CreateContainer within sandbox \"9fc95aab26127374466793dfb289698a08ed719f3c820e8049f4393a974b2e90\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 00:22:55.688282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534292925.mount: Deactivated successfully.
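The containerd entries above and below trace the CRI call sequence kubelet drives for every pod: RunPodSandbox, then PullImage, CreateContainer, and StartContainer. A minimal sketch of issuing that same sequence directly against containerd's CRI socket, assuming the k8s.io/cri-api and google.golang.org/grpc modules and hypothetical pod/container names:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI endpoint of the containerd instance logging above.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: returns the sandbox id echoed in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "demo-pod", Uid: "demo-uid", Namespace: "default", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. PullImage: containerd logs "PullImage ... returns image reference".
	image := &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"}
	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: image}); err != nil {
		log.Fatal(err)
	}

	// 3. CreateContainer within the sandbox, then 4. StartContainer.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo", Attempt: 0},
			Image:    image,
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}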
Jan 17 00:22:55.693966 containerd[2105]: time="2026-01-17T00:22:55.693914415Z" level=info msg="CreateContainer within sandbox \"9fc95aab26127374466793dfb289698a08ed719f3c820e8049f4393a974b2e90\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6a008aa5db4a48b23d871a6e6e667471929bf5158d72bce12d909f5ab4db1773\"" Jan 17 00:22:55.695546 containerd[2105]: time="2026-01-17T00:22:55.694905098Z" level=info msg="StartContainer for \"6a008aa5db4a48b23d871a6e6e667471929bf5158d72bce12d909f5ab4db1773\"" Jan 17 00:22:55.742766 kubelet[3360]: E0117 00:22:55.742729 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.742766 kubelet[3360]: W0117 00:22:55.742758 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.743801 kubelet[3360]: E0117 00:22:55.742787 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.743801 kubelet[3360]: E0117 00:22:55.743077 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.743801 kubelet[3360]: W0117 00:22:55.743093 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.743801 kubelet[3360]: E0117 00:22:55.743112 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.743801 kubelet[3360]: E0117 00:22:55.743382 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.743801 kubelet[3360]: W0117 00:22:55.743393 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.743801 kubelet[3360]: E0117 00:22:55.743406 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.743801 kubelet[3360]: E0117 00:22:55.743648 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.743801 kubelet[3360]: W0117 00:22:55.743659 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.743801 kubelet[3360]: E0117 00:22:55.743672 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:55.744761 kubelet[3360]: E0117 00:22:55.743917 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.744761 kubelet[3360]: W0117 00:22:55.743927 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.744761 kubelet[3360]: E0117 00:22:55.743939 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.744761 kubelet[3360]: E0117 00:22:55.744173 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.744761 kubelet[3360]: W0117 00:22:55.744185 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.744761 kubelet[3360]: E0117 00:22:55.744200 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.744761 kubelet[3360]: E0117 00:22:55.744430 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.744761 kubelet[3360]: W0117 00:22:55.744441 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.744761 kubelet[3360]: E0117 00:22:55.744453 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.744761 kubelet[3360]: E0117 00:22:55.744676 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.745423 kubelet[3360]: W0117 00:22:55.744687 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.745423 kubelet[3360]: E0117 00:22:55.744698 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.745423 kubelet[3360]: E0117 00:22:55.744912 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.745423 kubelet[3360]: W0117 00:22:55.744922 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.745423 kubelet[3360]: E0117 00:22:55.744933 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:22:55.745423 kubelet[3360]: E0117 00:22:55.745159 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.745423 kubelet[3360]: W0117 00:22:55.745170 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.745423 kubelet[3360]: E0117 00:22:55.745182 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.745423 kubelet[3360]: E0117 00:22:55.745394 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.745423 kubelet[3360]: W0117 00:22:55.745405 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.746335 kubelet[3360]: E0117 00:22:55.745416 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.747421 kubelet[3360]: E0117 00:22:55.747392 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.747421 kubelet[3360]: W0117 00:22:55.747411 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.747560 kubelet[3360]: E0117 00:22:55.747425 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.747699 kubelet[3360]: E0117 00:22:55.747673 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.747699 kubelet[3360]: W0117 00:22:55.747690 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.747813 kubelet[3360]: E0117 00:22:55.747703 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:22:55.747951 kubelet[3360]: E0117 00:22:55.747927 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:22:55.747951 kubelet[3360]: W0117 00:22:55.747944 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:22:55.748119 kubelet[3360]: E0117 00:22:55.747956 3360 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
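The driver-call failures above all have one cause: kubelet's FlexVolume prober executes the plugin binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument `init` and unmarshals its stdout as JSON, but the executable is missing, so stdout is empty and the decode fails with "unexpected end of JSON input". Below is a minimal sketch of that call contract in Python, using the path from the log; `probe_driver` is a hypothetical helper for illustration, not kubelet code.

```python
import json
import subprocess

# Path taken from the log above; "nodeagent~uds" follows FlexVolume's
# <vendor>~<driver> directory convention (vendor "nodeagent", driver "uds").
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe_driver(path: str) -> dict:
    # Hypothetical helper mimicking driver-call.go: run "<driver> init"
    # and parse stdout as JSON (a working driver prints {"status": "Success", ...}).
    try:
        out = subprocess.run([path, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # kubelet's W line: executable file not found in $PATH, output: ""
    # json.loads("") is the Python analogue of Go encoding/json's
    # "unexpected end of JSON input".
    return json.loads(out)

try:
    print(probe_driver(DRIVER))
except json.JSONDecodeError as err:
    print(f"Failed to unmarshal output for command: init, error: {err}")
```

Installing a driver binary that prints well-formed JSON into that directory, or removing the stale nodeagent~uds directory altogether, should silence the probe loop.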
Jan 17 00:22:55.788424 containerd[2105]: time="2026-01-17T00:22:55.788356104Z" level=info msg="StartContainer for \"6a008aa5db4a48b23d871a6e6e667471929bf5158d72bce12d909f5ab4db1773\" returns successfully" Jan 17 00:22:55.859902 containerd[2105]: time="2026-01-17T00:22:55.835014919Z" level=info msg="shim disconnected" id=6a008aa5db4a48b23d871a6e6e667471929bf5158d72bce12d909f5ab4db1773 namespace=k8s.io Jan 17 00:22:55.860169 containerd[2105]: time="2026-01-17T00:22:55.859906717Z" level=warning msg="cleaning up after shim disconnected" id=6a008aa5db4a48b23d871a6e6e667471929bf5158d72bce12d909f5ab4db1773 namespace=k8s.io Jan 17 00:22:55.860169 containerd[2105]: time="2026-01-17T00:22:55.859928033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:56.432602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a008aa5db4a48b23d871a6e6e667471929bf5158d72bce12d909f5ab4db1773-rootfs.mount: Deactivated successfully. Jan 17 00:22:56.668779 containerd[2105]: time="2026-01-17T00:22:56.668477949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:22:57.431903 kubelet[3360]: E0117 00:22:57.431845 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:22:59.431605 kubelet[3360]: E0117 00:22:59.431538 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:22:59.850989 containerd[2105]: time="2026-01-17T00:22:59.850845448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:59.852783 containerd[2105]: time="2026-01-17T00:22:59.852730424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:22:59.855440 containerd[2105]: time="2026-01-17T00:22:59.855378492Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:59.863961 containerd[2105]: time="2026-01-17T00:22:59.863908975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:59.865635 containerd[2105]: time="2026-01-17T00:22:59.865009026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.196480513s" Jan 17 00:22:59.865635 containerd[2105]: time="2026-01-17T00:22:59.865069936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:22:59.868311 
containerd[2105]: time="2026-01-17T00:22:59.868257852Z" level=info msg="CreateContainer within sandbox \"9fc95aab26127374466793dfb289698a08ed719f3c820e8049f4393a974b2e90\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:22:59.909800 containerd[2105]: time="2026-01-17T00:22:59.909748353Z" level=info msg="CreateContainer within sandbox \"9fc95aab26127374466793dfb289698a08ed719f3c820e8049f4393a974b2e90\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"243d21f25457e5f967fd60e10b899dcdfbb70944efc37a3a851f37107add61d0\"" Jan 17 00:22:59.911719 containerd[2105]: time="2026-01-17T00:22:59.910785642Z" level=info msg="StartContainer for \"243d21f25457e5f967fd60e10b899dcdfbb70944efc37a3a851f37107add61d0\"" Jan 17 00:22:59.954530 systemd[1]: run-containerd-runc-k8s.io-243d21f25457e5f967fd60e10b899dcdfbb70944efc37a3a851f37107add61d0-runc.iaYCyH.mount: Deactivated successfully. Jan 17 00:23:00.017592 containerd[2105]: time="2026-01-17T00:23:00.017528686Z" level=info msg="StartContainer for \"243d21f25457e5f967fd60e10b899dcdfbb70944efc37a3a851f37107add61d0\" returns successfully" Jan 17 00:23:01.118353 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:23:01.120644 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:23:01.118402 systemd-resolved[1998]: Flushed all caches. Jan 17 00:23:01.370725 containerd[2105]: time="2026-01-17T00:23:01.367240688Z" level=info msg="shim disconnected" id=243d21f25457e5f967fd60e10b899dcdfbb70944efc37a3a851f37107add61d0 namespace=k8s.io Jan 17 00:23:01.370725 containerd[2105]: time="2026-01-17T00:23:01.367317203Z" level=warning msg="cleaning up after shim disconnected" id=243d21f25457e5f967fd60e10b899dcdfbb70944efc37a3a851f37107add61d0 namespace=k8s.io Jan 17 00:23:01.370725 containerd[2105]: time="2026-01-17T00:23:01.367331901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:23:01.368641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-243d21f25457e5f967fd60e10b899dcdfbb70944efc37a3a851f37107add61d0-rootfs.mount: Deactivated successfully. 
Jan 17 00:23:01.381180 kubelet[3360]: I0117 00:23:01.380504 3360 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:23:01.394157 containerd[2105]: time="2026-01-17T00:23:01.393662181Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:23:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:23:01.464364 containerd[2105]: time="2026-01-17T00:23:01.461195834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hbb8z,Uid:d7198563-8b4e-4b52-ad88-2f9e6d09e79c,Namespace:calico-system,Attempt:0,}" Jan 17 00:23:01.667097 kubelet[3360]: I0117 00:23:01.665252 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2207401f-e738-47bd-8283-8eef3cbcb7c1-calico-apiserver-certs\") pod \"calico-apiserver-d8d9c5b87-7zrtb\" (UID: \"2207401f-e738-47bd-8283-8eef3cbcb7c1\") " pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" Jan 17 00:23:01.667097 kubelet[3360]: I0117 00:23:01.665357 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8c5q\" (UniqueName: \"kubernetes.io/projected/2c85088d-5853-486f-a2a6-a1b33d923ebd-kube-api-access-m8c5q\") pod \"calico-kube-controllers-54bbb49cd4-pb4fm\" (UID: \"2c85088d-5853-486f-a2a6-a1b33d923ebd\") " pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" Jan 17 00:23:01.667097 kubelet[3360]: I0117 00:23:01.665543 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/804c4956-a77e-4057-9db7-9d50191156a3-goldmane-key-pair\") pod \"goldmane-666569f655-22ww4\" (UID: \"804c4956-a77e-4057-9db7-9d50191156a3\") " pod="calico-system/goldmane-666569f655-22ww4" Jan 17 00:23:01.668286 kubelet[3360]: I0117 00:23:01.667724 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba1548f7-6605-4885-a26c-3f894994808a-config-volume\") pod \"coredns-668d6bf9bc-tv59c\" (UID: \"ba1548f7-6605-4885-a26c-3f894994808a\") " pod="kube-system/coredns-668d6bf9bc-tv59c" Jan 17 00:23:01.668442 kubelet[3360]: I0117 00:23:01.668365 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-ca-bundle\") pod \"whisker-5bbb6f8cc6-kdvhn\" (UID: \"e2556d4d-d0d2-4d28-b339-25db37c044e7\") " pod="calico-system/whisker-5bbb6f8cc6-kdvhn" Jan 17 00:23:01.668442 kubelet[3360]: I0117 00:23:01.668415 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/804c4956-a77e-4057-9db7-9d50191156a3-goldmane-ca-bundle\") pod \"goldmane-666569f655-22ww4\" (UID: \"804c4956-a77e-4057-9db7-9d50191156a3\") " pod="calico-system/goldmane-666569f655-22ww4" Jan 17 00:23:01.682287 kubelet[3360]: I0117 00:23:01.682142 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrvg\" (UniqueName: \"kubernetes.io/projected/ba1548f7-6605-4885-a26c-3f894994808a-kube-api-access-hqrvg\") pod \"coredns-668d6bf9bc-tv59c\" (UID: 
\"ba1548f7-6605-4885-a26c-3f894994808a\") " pod="kube-system/coredns-668d6bf9bc-tv59c" Jan 17 00:23:01.682785 kubelet[3360]: I0117 00:23:01.682573 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxc9f\" (UniqueName: \"kubernetes.io/projected/19297f6f-5ccc-4eab-996b-36acef548d9c-kube-api-access-xxc9f\") pod \"calico-apiserver-d8d9c5b87-h9bhg\" (UID: \"19297f6f-5ccc-4eab-996b-36acef548d9c\") " pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" Jan 17 00:23:01.682785 kubelet[3360]: I0117 00:23:01.682632 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh4hl\" (UniqueName: \"kubernetes.io/projected/2207401f-e738-47bd-8283-8eef3cbcb7c1-kube-api-access-gh4hl\") pod \"calico-apiserver-d8d9c5b87-7zrtb\" (UID: \"2207401f-e738-47bd-8283-8eef3cbcb7c1\") " pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" Jan 17 00:23:01.683212 kubelet[3360]: I0117 00:23:01.682991 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xzx\" (UniqueName: \"kubernetes.io/projected/e2556d4d-d0d2-4d28-b339-25db37c044e7-kube-api-access-d2xzx\") pod \"whisker-5bbb6f8cc6-kdvhn\" (UID: \"e2556d4d-d0d2-4d28-b339-25db37c044e7\") " pod="calico-system/whisker-5bbb6f8cc6-kdvhn" Jan 17 00:23:01.683629 kubelet[3360]: I0117 00:23:01.683037 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c85088d-5853-486f-a2a6-a1b33d923ebd-tigera-ca-bundle\") pod \"calico-kube-controllers-54bbb49cd4-pb4fm\" (UID: \"2c85088d-5853-486f-a2a6-a1b33d923ebd\") " pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" Jan 17 00:23:01.683629 kubelet[3360]: I0117 00:23:01.683398 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27bk9\" (UniqueName: \"kubernetes.io/projected/804c4956-a77e-4057-9db7-9d50191156a3-kube-api-access-27bk9\") pod \"goldmane-666569f655-22ww4\" (UID: \"804c4956-a77e-4057-9db7-9d50191156a3\") " pod="calico-system/goldmane-666569f655-22ww4" Jan 17 00:23:01.684172 kubelet[3360]: I0117 00:23:01.683804 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19297f6f-5ccc-4eab-996b-36acef548d9c-calico-apiserver-certs\") pod \"calico-apiserver-d8d9c5b87-h9bhg\" (UID: \"19297f6f-5ccc-4eab-996b-36acef548d9c\") " pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" Jan 17 00:23:01.684172 kubelet[3360]: I0117 00:23:01.683949 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/804c4956-a77e-4057-9db7-9d50191156a3-config\") pod \"goldmane-666569f655-22ww4\" (UID: \"804c4956-a77e-4057-9db7-9d50191156a3\") " pod="calico-system/goldmane-666569f655-22ww4" Jan 17 00:23:01.684172 kubelet[3360]: I0117 00:23:01.683981 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c635936a-f4da-49b1-a5f7-daacf10da049-config-volume\") pod \"coredns-668d6bf9bc-qm2t4\" (UID: \"c635936a-f4da-49b1-a5f7-daacf10da049\") " pod="kube-system/coredns-668d6bf9bc-qm2t4" Jan 17 00:23:01.685519 kubelet[3360]: I0117 00:23:01.685161 3360 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwnk\" (UniqueName: \"kubernetes.io/projected/c635936a-f4da-49b1-a5f7-daacf10da049-kube-api-access-cpwnk\") pod \"coredns-668d6bf9bc-qm2t4\" (UID: \"c635936a-f4da-49b1-a5f7-daacf10da049\") " pod="kube-system/coredns-668d6bf9bc-qm2t4" Jan 17 00:23:01.685519 kubelet[3360]: I0117 00:23:01.685452 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-backend-key-pair\") pod \"whisker-5bbb6f8cc6-kdvhn\" (UID: \"e2556d4d-d0d2-4d28-b339-25db37c044e7\") " pod="calico-system/whisker-5bbb6f8cc6-kdvhn" Jan 17 00:23:01.778402 containerd[2105]: time="2026-01-17T00:23:01.778358469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:23:01.951408 containerd[2105]: time="2026-01-17T00:23:01.950977655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv59c,Uid:ba1548f7-6605-4885-a26c-3f894994808a,Namespace:kube-system,Attempt:0,}" Jan 17 00:23:01.951408 containerd[2105]: time="2026-01-17T00:23:01.951033728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bbb49cd4-pb4fm,Uid:2c85088d-5853-486f-a2a6-a1b33d923ebd,Namespace:calico-system,Attempt:0,}" Jan 17 00:23:01.989650 containerd[2105]: time="2026-01-17T00:23:01.989343373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-h9bhg,Uid:19297f6f-5ccc-4eab-996b-36acef548d9c,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:23:02.002919 containerd[2105]: time="2026-01-17T00:23:01.990772224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-7zrtb,Uid:2207401f-e738-47bd-8283-8eef3cbcb7c1,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:23:02.015899 containerd[2105]: time="2026-01-17T00:23:02.015816690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bbb6f8cc6-kdvhn,Uid:e2556d4d-d0d2-4d28-b339-25db37c044e7,Namespace:calico-system,Attempt:0,}" Jan 17 00:23:02.030708 containerd[2105]: time="2026-01-17T00:23:02.030335905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qm2t4,Uid:c635936a-f4da-49b1-a5f7-daacf10da049,Namespace:kube-system,Attempt:0,}" Jan 17 00:23:02.032581 containerd[2105]: time="2026-01-17T00:23:02.032364245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22ww4,Uid:804c4956-a77e-4057-9db7-9d50191156a3,Namespace:calico-system,Attempt:0,}" Jan 17 00:23:02.923576 containerd[2105]: time="2026-01-17T00:23:02.923521811Z" level=error msg="Failed to destroy network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:02.924862 containerd[2105]: time="2026-01-17T00:23:02.924813996Z" level=error msg="encountered an error cleaning up failed sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:02.975222 containerd[2105]: time="2026-01-17T00:23:02.974798107Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:csi-node-driver-hbb8z,Uid:d7198563-8b4e-4b52-ad88-2f9e6d09e79c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.004770 kubelet[3360]: E0117 00:23:02.976156 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.016195 kubelet[3360]: E0117 00:23:03.004820 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hbb8z" Jan 17 00:23:03.018400 kubelet[3360]: E0117 00:23:03.016215 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hbb8z" Jan 17 00:23:03.018400 kubelet[3360]: E0117 00:23:03.016292 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:23:03.123842 containerd[2105]: time="2026-01-17T00:23:03.123786901Z" level=error msg="Failed to destroy network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.134860 containerd[2105]: time="2026-01-17T00:23:03.134757015Z" level=error msg="encountered an error cleaning up failed sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.136016 containerd[2105]: 
time="2026-01-17T00:23:03.135319820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-7zrtb,Uid:2207401f-e738-47bd-8283-8eef3cbcb7c1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.139287 kubelet[3360]: E0117 00:23:03.136740 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.139287 kubelet[3360]: E0117 00:23:03.136819 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" Jan 17 00:23:03.139287 kubelet[3360]: E0117 00:23:03.136847 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" Jan 17 00:23:03.139658 kubelet[3360]: E0117 00:23:03.136904 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8d9c5b87-7zrtb_calico-apiserver(2207401f-e738-47bd-8283-8eef3cbcb7c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8d9c5b87-7zrtb_calico-apiserver(2207401f-e738-47bd-8283-8eef3cbcb7c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:23:03.169284 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:23:03.168602 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:23:03.168636 systemd-resolved[1998]: Flushed all caches. 
Jan 17 00:23:03.184223 containerd[2105]: time="2026-01-17T00:23:03.183551416Z" level=error msg="Failed to destroy network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.184223 containerd[2105]: time="2026-01-17T00:23:03.183941853Z" level=error msg="encountered an error cleaning up failed sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.184223 containerd[2105]: time="2026-01-17T00:23:03.184001078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv59c,Uid:ba1548f7-6605-4885-a26c-3f894994808a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.186323 kubelet[3360]: E0117 00:23:03.184846 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.186323 kubelet[3360]: E0117 00:23:03.184935 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tv59c" Jan 17 00:23:03.186323 kubelet[3360]: E0117 00:23:03.184971 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tv59c" Jan 17 00:23:03.186720 kubelet[3360]: E0117 00:23:03.185031 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tv59c_kube-system(ba1548f7-6605-4885-a26c-3f894994808a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tv59c_kube-system(ba1548f7-6605-4885-a26c-3f894994808a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tv59c" 
podUID="ba1548f7-6605-4885-a26c-3f894994808a" Jan 17 00:23:03.250875 containerd[2105]: time="2026-01-17T00:23:03.247308488Z" level=error msg="Failed to destroy network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.250875 containerd[2105]: time="2026-01-17T00:23:03.247812543Z" level=error msg="encountered an error cleaning up failed sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.250875 containerd[2105]: time="2026-01-17T00:23:03.247871457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-h9bhg,Uid:19297f6f-5ccc-4eab-996b-36acef548d9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.251509 kubelet[3360]: E0117 00:23:03.248165 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.251509 kubelet[3360]: E0117 00:23:03.248223 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" Jan 17 00:23:03.251509 kubelet[3360]: E0117 00:23:03.248250 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" Jan 17 00:23:03.251930 kubelet[3360]: E0117 00:23:03.248304 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d8d9c5b87-h9bhg_calico-apiserver(19297f6f-5ccc-4eab-996b-36acef548d9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d8d9c5b87-h9bhg_calico-apiserver(19297f6f-5ccc-4eab-996b-36acef548d9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:03.286329 containerd[2105]: time="2026-01-17T00:23:03.286265696Z" level=error msg="Failed to destroy network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.286704 containerd[2105]: time="2026-01-17T00:23:03.286667272Z" level=error msg="encountered an error cleaning up failed sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.286793 containerd[2105]: time="2026-01-17T00:23:03.286735535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bbb6f8cc6-kdvhn,Uid:e2556d4d-d0d2-4d28-b339-25db37c044e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.287446 kubelet[3360]: E0117 00:23:03.286974 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.287446 kubelet[3360]: E0117 00:23:03.287071 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bbb6f8cc6-kdvhn" Jan 17 00:23:03.287446 kubelet[3360]: E0117 00:23:03.287102 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bbb6f8cc6-kdvhn" Jan 17 00:23:03.287642 kubelet[3360]: E0117 00:23:03.287156 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bbb6f8cc6-kdvhn_calico-system(e2556d4d-d0d2-4d28-b339-25db37c044e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bbb6f8cc6-kdvhn_calico-system(e2556d4d-d0d2-4d28-b339-25db37c044e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bbb6f8cc6-kdvhn" podUID="e2556d4d-d0d2-4d28-b339-25db37c044e7" Jan 17 00:23:03.295618 containerd[2105]: time="2026-01-17T00:23:03.295546312Z" level=error msg="Failed to destroy network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.296127 containerd[2105]: time="2026-01-17T00:23:03.296036613Z" level=error msg="encountered an error cleaning up failed sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.296222 containerd[2105]: time="2026-01-17T00:23:03.296177677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bbb49cd4-pb4fm,Uid:2c85088d-5853-486f-a2a6-a1b33d923ebd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.296592 kubelet[3360]: E0117 00:23:03.296443 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.296592 kubelet[3360]: E0117 00:23:03.296523 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" Jan 17 00:23:03.296592 kubelet[3360]: E0117 00:23:03.296556 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" Jan 17 00:23:03.297188 kubelet[3360]: E0117 00:23:03.296831 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54bbb49cd4-pb4fm_calico-system(2c85088d-5853-486f-a2a6-a1b33d923ebd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54bbb49cd4-pb4fm_calico-system(2c85088d-5853-486f-a2a6-a1b33d923ebd)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:03.307128 containerd[2105]: time="2026-01-17T00:23:03.307028075Z" level=error msg="Failed to destroy network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.307673 containerd[2105]: time="2026-01-17T00:23:03.307631492Z" level=error msg="encountered an error cleaning up failed sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.307771 containerd[2105]: time="2026-01-17T00:23:03.307719968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22ww4,Uid:804c4956-a77e-4057-9db7-9d50191156a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.308070 kubelet[3360]: E0117 00:23:03.308003 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.308171 kubelet[3360]: E0117 00:23:03.308127 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-22ww4" Jan 17 00:23:03.308224 kubelet[3360]: E0117 00:23:03.308181 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-22ww4" Jan 17 00:23:03.308269 kubelet[3360]: E0117 00:23:03.308237 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-22ww4_calico-system(804c4956-a77e-4057-9db7-9d50191156a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-22ww4_calico-system(804c4956-a77e-4057-9db7-9d50191156a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:23:03.321091 containerd[2105]: time="2026-01-17T00:23:03.321014488Z" level=error msg="Failed to destroy network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.321463 containerd[2105]: time="2026-01-17T00:23:03.321412132Z" level=error msg="encountered an error cleaning up failed sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.321580 containerd[2105]: time="2026-01-17T00:23:03.321489922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qm2t4,Uid:c635936a-f4da-49b1-a5f7-daacf10da049,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.321795 kubelet[3360]: E0117 00:23:03.321751 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:03.321960 kubelet[3360]: E0117 00:23:03.321817 3360 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qm2t4" Jan 17 00:23:03.321960 kubelet[3360]: E0117 00:23:03.321853 3360 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qm2t4" Jan 17 00:23:03.322299 kubelet[3360]: E0117 00:23:03.322248 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qm2t4_kube-system(c635936a-f4da-49b1-a5f7-daacf10da049)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qm2t4_kube-system(c635936a-f4da-49b1-a5f7-daacf10da049)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qm2t4" podUID="c635936a-f4da-49b1-a5f7-daacf10da049" Jan 17 00:23:03.392467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518-shm.mount: Deactivated successfully. Jan 17 00:23:03.392727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867-shm.mount: Deactivated successfully. Jan 17 00:23:03.392876 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580-shm.mount: Deactivated successfully. Jan 17 00:23:03.393013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4-shm.mount: Deactivated successfully. Jan 17 00:23:03.394927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1-shm.mount: Deactivated successfully. Jan 17 00:23:03.395138 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d-shm.mount: Deactivated successfully. Jan 17 00:23:03.826599 kubelet[3360]: I0117 00:23:03.826095 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:03.830308 kubelet[3360]: I0117 00:23:03.830276 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:03.863896 kubelet[3360]: I0117 00:23:03.862260 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:03.873970 kubelet[3360]: I0117 00:23:03.873857 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:03.883796 kubelet[3360]: I0117 00:23:03.881850 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:03.892849 kubelet[3360]: I0117 00:23:03.892808 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:03.903278 kubelet[3360]: I0117 00:23:03.903250 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:03.910314 kubelet[3360]: I0117 00:23:03.910010 3360 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:03.936185 containerd[2105]: time="2026-01-17T00:23:03.936130261Z" level=info msg="StopPodSandbox for 
\"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\"" Jan 17 00:23:03.946117 containerd[2105]: time="2026-01-17T00:23:03.945216691Z" level=info msg="StopPodSandbox for \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\"" Jan 17 00:23:03.946837 containerd[2105]: time="2026-01-17T00:23:03.946685589Z" level=info msg="Ensure that sandbox c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580 in task-service has been cleanup successfully" Jan 17 00:23:03.966141 containerd[2105]: time="2026-01-17T00:23:03.965315707Z" level=info msg="Ensure that sandbox fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d in task-service has been cleanup successfully" Jan 17 00:23:03.968237 containerd[2105]: time="2026-01-17T00:23:03.968195993Z" level=info msg="StopPodSandbox for \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\"" Jan 17 00:23:03.968599 containerd[2105]: time="2026-01-17T00:23:03.968577018Z" level=info msg="Ensure that sandbox 48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518 in task-service has been cleanup successfully" Jan 17 00:23:03.980462 containerd[2105]: time="2026-01-17T00:23:03.980128020Z" level=info msg="StopPodSandbox for \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\"" Jan 17 00:23:03.981352 containerd[2105]: time="2026-01-17T00:23:03.980682383Z" level=info msg="Ensure that sandbox 26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867 in task-service has been cleanup successfully" Jan 17 00:23:03.981352 containerd[2105]: time="2026-01-17T00:23:03.980823184Z" level=info msg="StopPodSandbox for \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\"" Jan 17 00:23:03.981352 containerd[2105]: time="2026-01-17T00:23:03.980985942Z" level=info msg="Ensure that sandbox a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1 in task-service has been cleanup successfully" Jan 17 00:23:03.982216 containerd[2105]: time="2026-01-17T00:23:03.982179047Z" level=info msg="StopPodSandbox for \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\"" Jan 17 00:23:03.982707 containerd[2105]: time="2026-01-17T00:23:03.982682985Z" level=info msg="Ensure that sandbox 36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195 in task-service has been cleanup successfully" Jan 17 00:23:03.985876 containerd[2105]: time="2026-01-17T00:23:03.985743140Z" level=info msg="StopPodSandbox for \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\"" Jan 17 00:23:03.986010 containerd[2105]: time="2026-01-17T00:23:03.985955522Z" level=info msg="Ensure that sandbox d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4 in task-service has been cleanup successfully" Jan 17 00:23:03.987689 containerd[2105]: time="2026-01-17T00:23:03.987648241Z" level=info msg="StopPodSandbox for \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\"" Jan 17 00:23:03.994201 containerd[2105]: time="2026-01-17T00:23:03.994144556Z" level=info msg="Ensure that sandbox 49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4 in task-service has been cleanup successfully" Jan 17 00:23:04.270518 containerd[2105]: time="2026-01-17T00:23:04.270302240Z" level=error msg="StopPodSandbox for \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\" failed" error="failed to destroy network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.271028 containerd[2105]: time="2026-01-17T00:23:04.270909400Z" level=error msg="StopPodSandbox for \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\" failed" error="failed to destroy network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.271932 kubelet[3360]: E0117 00:23:04.271885 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:04.273003 kubelet[3360]: E0117 00:23:04.272747 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:04.274066 containerd[2105]: time="2026-01-17T00:23:04.273262449Z" level=error msg="StopPodSandbox for \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\" failed" error="failed to destroy network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.282219 kubelet[3360]: E0117 00:23:04.272953 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580"} Jan 17 00:23:04.282391 kubelet[3360]: E0117 00:23:04.282289 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19297f6f-5ccc-4eab-996b-36acef548d9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.282391 kubelet[3360]: E0117 00:23:04.272989 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4"} Jan 17 00:23:04.282391 kubelet[3360]: E0117 00:23:04.282371 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2556d4d-d0d2-4d28-b339-25db37c044e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.283108 kubelet[3360]: E0117 00:23:04.282394 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2556d4d-d0d2-4d28-b339-25db37c044e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bbb6f8cc6-kdvhn" podUID="e2556d4d-d0d2-4d28-b339-25db37c044e7" Jan 17 00:23:04.283288 kubelet[3360]: E0117 00:23:04.283254 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:04.283349 kubelet[3360]: E0117 00:23:04.283324 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195"} Jan 17 00:23:04.283398 kubelet[3360]: E0117 00:23:04.283362 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"804c4956-a77e-4057-9db7-9d50191156a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.283675 kubelet[3360]: E0117 00:23:04.283410 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"804c4956-a77e-4057-9db7-9d50191156a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:23:04.287600 kubelet[3360]: E0117 00:23:04.282325 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19297f6f-5ccc-4eab-996b-36acef548d9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:04.287962 containerd[2105]: time="2026-01-17T00:23:04.287922603Z" level=error msg="StopPodSandbox for 
\"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\" failed" error="failed to destroy network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.289790 kubelet[3360]: E0117 00:23:04.289321 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:04.289790 kubelet[3360]: E0117 00:23:04.289377 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4"} Jan 17 00:23:04.289790 kubelet[3360]: E0117 00:23:04.289420 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2207401f-e738-47bd-8283-8eef3cbcb7c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.289790 kubelet[3360]: E0117 00:23:04.289451 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2207401f-e738-47bd-8283-8eef3cbcb7c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:23:04.290532 containerd[2105]: time="2026-01-17T00:23:04.290411234Z" level=error msg="StopPodSandbox for \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\" failed" error="failed to destroy network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.291102 kubelet[3360]: E0117 00:23:04.290627 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:04.291102 kubelet[3360]: E0117 00:23:04.290674 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1"} Jan 17 00:23:04.291102 kubelet[3360]: E0117 00:23:04.290710 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba1548f7-6605-4885-a26c-3f894994808a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.291102 kubelet[3360]: E0117 00:23:04.290738 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba1548f7-6605-4885-a26c-3f894994808a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tv59c" podUID="ba1548f7-6605-4885-a26c-3f894994808a" Jan 17 00:23:04.291561 containerd[2105]: time="2026-01-17T00:23:04.291502329Z" level=error msg="StopPodSandbox for \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\" failed" error="failed to destroy network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.291886 kubelet[3360]: E0117 00:23:04.291804 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:04.291966 kubelet[3360]: E0117 00:23:04.291916 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867"} Jan 17 00:23:04.291966 kubelet[3360]: E0117 00:23:04.291957 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c85088d-5853-486f-a2a6-a1b33d923ebd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.292677 kubelet[3360]: E0117 00:23:04.292000 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c85088d-5853-486f-a2a6-a1b33d923ebd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:04.293235 containerd[2105]: time="2026-01-17T00:23:04.293195992Z" level=error msg="StopPodSandbox for \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\" failed" error="failed to destroy network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.293531 kubelet[3360]: E0117 00:23:04.293479 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:04.293618 kubelet[3360]: E0117 00:23:04.293544 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d"} Jan 17 00:23:04.293618 kubelet[3360]: E0117 00:23:04.293584 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7198563-8b4e-4b52-ad88-2f9e6d09e79c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.293743 kubelet[3360]: E0117 00:23:04.293617 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7198563-8b4e-4b52-ad88-2f9e6d09e79c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:23:04.297967 containerd[2105]: time="2026-01-17T00:23:04.297914163Z" level=error msg="StopPodSandbox for \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\" failed" error="failed to destroy network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:23:04.298713 kubelet[3360]: E0117 00:23:04.298333 3360 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:04.298713 kubelet[3360]: E0117 00:23:04.298437 3360 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518"} Jan 17 00:23:04.298713 kubelet[3360]: E0117 00:23:04.298509 3360 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c635936a-f4da-49b1-a5f7-daacf10da049\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:23:04.298713 kubelet[3360]: E0117 00:23:04.298564 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c635936a-f4da-49b1-a5f7-daacf10da049\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qm2t4" podUID="c635936a-f4da-49b1-a5f7-daacf10da049" Jan 17 00:23:07.134388 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:23:07.136756 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:23:07.134422 systemd-resolved[1998]: Flushed all caches. Jan 17 00:23:09.184572 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:23:09.182287 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:23:09.182296 systemd-resolved[1998]: Flushed all caches. Jan 17 00:23:09.557175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129789477.mount: Deactivated successfully. 
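Every CreatePodSandbox and KillPodSandbox failure above bottoms out in the same check: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it has started, and aborts the add/delete when the file is absent. A minimal sketch of that gate, with the path and wording taken from the errors logged here (not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodename mirrors the readiness gate implied by the errors above: the CNI
// plugin cannot resolve its IPAM host or tear down endpoints until
// calico/node has written its node name under /var/lib/calico/.
func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that "+
			"the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("cni add/delete would fail:", err)
		return
	}
	fmt.Println("node:", name)
}
```

Once calico-node is up (the image pull and StartContainer that follow), the file appears and the retried sandbox operations start to succeed.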
Jan 17 00:23:09.666934 containerd[2105]: time="2026-01-17T00:23:09.666841526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:23:09.678174 containerd[2105]: time="2026-01-17T00:23:09.678119166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:09.716271 containerd[2105]: time="2026-01-17T00:23:09.715890104Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:09.718748 containerd[2105]: time="2026-01-17T00:23:09.718672500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:23:09.728647 containerd[2105]: time="2026-01-17T00:23:09.728584558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.946863905s" Jan 17 00:23:09.728647 containerd[2105]: time="2026-01-17T00:23:09.728645658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:23:09.767786 containerd[2105]: time="2026-01-17T00:23:09.767721361Z" level=info msg="CreateContainer within sandbox \"9fc95aab26127374466793dfb289698a08ed719f3c820e8049f4393a974b2e90\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:23:09.821730 containerd[2105]: time="2026-01-17T00:23:09.821489249Z" level=info msg="CreateContainer within sandbox \"9fc95aab26127374466793dfb289698a08ed719f3c820e8049f4393a974b2e90\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2a9ac62098cbe44056cc962e8390c9b6530a74529de5c67e5db24a87f202ebf8\"" Jan 17 00:23:09.829717 containerd[2105]: time="2026-01-17T00:23:09.829527955Z" level=info msg="StartContainer for \"2a9ac62098cbe44056cc962e8390c9b6530a74529de5c67e5db24a87f202ebf8\"" Jan 17 00:23:10.073146 containerd[2105]: time="2026-01-17T00:23:10.071946660Z" level=info msg="StartContainer for \"2a9ac62098cbe44056cc962e8390c9b6530a74529de5c67e5db24a87f202ebf8\" returns successfully" Jan 17 00:23:10.216835 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:23:10.284501 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:23:11.232912 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:23:11.230139 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:23:11.230148 systemd-resolved[1998]: Flushed all caches.
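For a sense of scale, the pull above reports 156,883,675 bytes read in 7.946863905 s, roughly 19.7 MB/s; plugging the logged numbers in directly:

```go
package main

import "fmt"

func main() {
	const bytesRead = 156883675.0 // "bytes read" from the stop-pulling event above
	const seconds = 7.946863905   // duration from the "Pulled image ... in" event
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // prints 19.7 MB/s
}
```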
Jan 17 00:23:12.563201 kubelet[3360]: I0117 00:23:12.558084 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lvvlx" podStartSLOduration=4.00153458 podStartE2EDuration="21.53970952s" podCreationTimestamp="2026-01-17 00:22:51 +0000 UTC" firstStartedPulling="2026-01-17 00:22:52.191344872 +0000 UTC m=+21.939984988" lastFinishedPulling="2026-01-17 00:23:09.729519814 +0000 UTC m=+39.478159928" observedRunningTime="2026-01-17 00:23:11.042561901 +0000 UTC m=+40.791202030" watchObservedRunningTime="2026-01-17 00:23:12.53970952 +0000 UTC m=+42.288349644" Jan 17 00:23:12.578012 containerd[2105]: time="2026-01-17T00:23:12.577967172Z" level=info msg="StopPodSandbox for \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\"" Jan 17 00:23:12.745166 kubelet[3360]: I0117 00:23:12.745074 3360 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:12.704 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:12.706 [INFO][4915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" iface="eth0" netns="/var/run/netns/cni-ea593185-54ff-f802-5e1d-27f3d2351d01" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:12.707 [INFO][4915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" iface="eth0" netns="/var/run/netns/cni-ea593185-54ff-f802-5e1d-27f3d2351d01" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:12.708 [INFO][4915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" iface="eth0" netns="/var/run/netns/cni-ea593185-54ff-f802-5e1d-27f3d2351d01" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:12.708 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:12.708 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:13.016 [INFO][4923] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:13.019 [INFO][4923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:13.019 [INFO][4923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:13.061 [WARNING][4923] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:13.061 [INFO][4923] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:13.067 [INFO][4923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:13.081280 containerd[2105]: 2026-01-17 00:23:13.072 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:13.096207 containerd[2105]: time="2026-01-17T00:23:13.096116867Z" level=info msg="TearDown network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\" successfully" Jan 17 00:23:13.096207 containerd[2105]: time="2026-01-17T00:23:13.096201718Z" level=info msg="StopPodSandbox for \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\" returns successfully" Jan 17 00:23:13.099892 systemd[1]: run-netns-cni\x2dea593185\x2d54ff\x2df802\x2d5e1d\x2d27f3d2351d01.mount: Deactivated successfully. Jan 17 00:23:13.270602 kubelet[3360]: I0117 00:23:13.269697 3360 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2xzx\" (UniqueName: \"kubernetes.io/projected/e2556d4d-d0d2-4d28-b339-25db37c044e7-kube-api-access-d2xzx\") pod \"e2556d4d-d0d2-4d28-b339-25db37c044e7\" (UID: \"e2556d4d-d0d2-4d28-b339-25db37c044e7\") " Jan 17 00:23:13.270602 kubelet[3360]: I0117 00:23:13.269829 3360 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-backend-key-pair\") pod \"e2556d4d-d0d2-4d28-b339-25db37c044e7\" (UID: \"e2556d4d-d0d2-4d28-b339-25db37c044e7\") " Jan 17 00:23:13.270602 kubelet[3360]: I0117 00:23:13.269877 3360 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-ca-bundle\") pod \"e2556d4d-d0d2-4d28-b339-25db37c044e7\" (UID: \"e2556d4d-d0d2-4d28-b339-25db37c044e7\") " Jan 17 00:23:13.282166 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:23:13.278394 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:23:13.278429 systemd-resolved[1998]: Flushed all caches. Jan 17 00:23:13.282813 kubelet[3360]: I0117 00:23:13.277918 3360 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e2556d4d-d0d2-4d28-b339-25db37c044e7" (UID: "e2556d4d-d0d2-4d28-b339-25db37c044e7"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:23:13.296992 kubelet[3360]: I0117 00:23:13.296272 3360 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2556d4d-d0d2-4d28-b339-25db37c044e7-kube-api-access-d2xzx" (OuterVolumeSpecName: "kube-api-access-d2xzx") pod "e2556d4d-d0d2-4d28-b339-25db37c044e7" (UID: "e2556d4d-d0d2-4d28-b339-25db37c044e7"). InnerVolumeSpecName "kube-api-access-d2xzx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:23:13.302080 kubelet[3360]: I0117 00:23:13.301900 3360 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e2556d4d-d0d2-4d28-b339-25db37c044e7" (UID: "e2556d4d-d0d2-4d28-b339-25db37c044e7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:23:13.303130 systemd[1]: var-lib-kubelet-pods-e2556d4d\x2dd0d2\x2d4d28\x2db339\x2d25db37c044e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd2xzx.mount: Deactivated successfully. Jan 17 00:23:13.303343 systemd[1]: var-lib-kubelet-pods-e2556d4d\x2dd0d2\x2d4d28\x2db339\x2d25db37c044e7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:23:13.374760 kubelet[3360]: I0117 00:23:13.374486 3360 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d2xzx\" (UniqueName: \"kubernetes.io/projected/e2556d4d-d0d2-4d28-b339-25db37c044e7-kube-api-access-d2xzx\") on node \"ip-172-31-29-247\" DevicePath \"\"" Jan 17 00:23:13.374760 kubelet[3360]: I0117 00:23:13.374607 3360 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-backend-key-pair\") on node \"ip-172-31-29-247\" DevicePath \"\"" Jan 17 00:23:13.374760 kubelet[3360]: I0117 00:23:13.374627 3360 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2556d4d-d0d2-4d28-b339-25db37c044e7-whisker-ca-bundle\") on node \"ip-172-31-29-247\" DevicePath \"\"" Jan 17 00:23:13.754015 kernel: bpftool[5009]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:23:14.123498 (udev-worker)[5029]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:23:14.127434 systemd-networkd[1659]: vxlan.calico: Link UP Jan 17 00:23:14.127448 systemd-networkd[1659]: vxlan.calico: Gained carrier Jan 17 00:23:14.385630 (udev-worker)[5042]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:23:14.439403 kubelet[3360]: I0117 00:23:14.439363 3360 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2556d4d-d0d2-4d28-b339-25db37c044e7" path="/var/lib/kubelet/pods/e2556d4d-d0d2-4d28-b339-25db37c044e7/volumes" Jan 17 00:23:14.486660 kubelet[3360]: I0117 00:23:14.486612 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e903c9f-05d5-45fd-9d78-2d7516aa0977-whisker-backend-key-pair\") pod \"whisker-64d946f8bb-fs6r2\" (UID: \"6e903c9f-05d5-45fd-9d78-2d7516aa0977\") " pod="calico-system/whisker-64d946f8bb-fs6r2" Jan 17 00:23:14.487072 kubelet[3360]: I0117 00:23:14.486881 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzjv6\" (UniqueName: \"kubernetes.io/projected/6e903c9f-05d5-45fd-9d78-2d7516aa0977-kube-api-access-jzjv6\") pod \"whisker-64d946f8bb-fs6r2\" (UID: \"6e903c9f-05d5-45fd-9d78-2d7516aa0977\") " pod="calico-system/whisker-64d946f8bb-fs6r2" Jan 17 00:23:14.487072 kubelet[3360]: I0117 00:23:14.486917 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e903c9f-05d5-45fd-9d78-2d7516aa0977-whisker-ca-bundle\") pod \"whisker-64d946f8bb-fs6r2\" (UID: \"6e903c9f-05d5-45fd-9d78-2d7516aa0977\") " pod="calico-system/whisker-64d946f8bb-fs6r2" Jan 17 00:23:14.636129 containerd[2105]: time="2026-01-17T00:23:14.636079488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d946f8bb-fs6r2,Uid:6e903c9f-05d5-45fd-9d78-2d7516aa0977,Namespace:calico-system,Attempt:0,}" Jan 17 00:23:14.909604 systemd-networkd[1659]: cali13f0be5e8e8: Link UP Jan 17 00:23:14.911176 systemd-networkd[1659]: cali13f0be5e8e8: Gained carrier Jan 17 00:23:14.913214 (udev-worker)[5025]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.775 [INFO][5062] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0 whisker-64d946f8bb- calico-system 6e903c9f-05d5-45fd-9d78-2d7516aa0977 893 0 2026-01-17 00:23:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64d946f8bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-247 whisker-64d946f8bb-fs6r2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali13f0be5e8e8 [] [] }} ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.776 [INFO][5062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.839 [INFO][5091] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" HandleID="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Workload="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.839 [INFO][5091] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" HandleID="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Workload="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037b8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-247", "pod":"whisker-64d946f8bb-fs6r2", "timestamp":"2026-01-17 00:23:14.839105619 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.839 [INFO][5091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.839 [INFO][5091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.839 [INFO][5091] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.852 [INFO][5091] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.870 [INFO][5091] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.875 [INFO][5091] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.877 [INFO][5091] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.880 [INFO][5091] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.880 [INFO][5091] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.882 [INFO][5091] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.888 [INFO][5091] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.896 [INFO][5091] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.65/26] block=192.168.127.64/26 handle="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.897 [INFO][5091] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.65/26] handle="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" host="ip-172-31-29-247" Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.897 [INFO][5091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
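The IPAM trace above is Calico's block-affinity scheme in miniature: this host holds an affinity for the /26 block 192.168.127.64/26, loads the block under the host-wide lock, and claims the first free address in it, 192.168.127.65. A toy version of just the selection step, using only the standard library (Calico's real IPAM also persists handles and block state in the datastore):

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree scans an affine block for the first unclaimed address,
// skipping the block's own base address (.64 here), which is why the
// first workload on this host received 192.168.127.65.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.127.64/26")
	addr, ok := firstFree(block, map[netip.Addr]bool{})
	fmt.Println(addr, ok) // 192.168.127.65 true
}
```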
Jan 17 00:23:14.941079 containerd[2105]: 2026-01-17 00:23:14.897 [INFO][5091] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.65/26] IPv6=[] ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" HandleID="k8s-pod-network.d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Workload="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" Jan 17 00:23:14.942349 containerd[2105]: 2026-01-17 00:23:14.902 [INFO][5062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0", GenerateName:"whisker-64d946f8bb-", Namespace:"calico-system", SelfLink:"", UID:"6e903c9f-05d5-45fd-9d78-2d7516aa0977", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64d946f8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"whisker-64d946f8bb-fs6r2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali13f0be5e8e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:14.942349 containerd[2105]: 2026-01-17 00:23:14.902 [INFO][5062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.65/32] ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" Jan 17 00:23:14.942349 containerd[2105]: 2026-01-17 00:23:14.902 [INFO][5062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13f0be5e8e8 ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" Jan 17 00:23:14.942349 containerd[2105]: 2026-01-17 00:23:14.912 [INFO][5062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" Jan 17 00:23:14.942349 containerd[2105]: 2026-01-17 00:23:14.913 [INFO][5062] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" 
WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0", GenerateName:"whisker-64d946f8bb-", Namespace:"calico-system", SelfLink:"", UID:"6e903c9f-05d5-45fd-9d78-2d7516aa0977", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64d946f8bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db", Pod:"whisker-64d946f8bb-fs6r2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali13f0be5e8e8", MAC:"9e:70:12:0c:f5:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:14.942349 containerd[2105]: 2026-01-17 00:23:14.929 [INFO][5062] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db" Namespace="calico-system" Pod="whisker-64d946f8bb-fs6r2" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--64d946f8bb--fs6r2-eth0" Jan 17 00:23:14.977569 containerd[2105]: time="2026-01-17T00:23:14.977431452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:14.977569 containerd[2105]: time="2026-01-17T00:23:14.977510000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:14.977569 containerd[2105]: time="2026-01-17T00:23:14.977532830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:14.980584 containerd[2105]: time="2026-01-17T00:23:14.980223786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:15.153537 containerd[2105]: time="2026-01-17T00:23:15.152896452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64d946f8bb-fs6r2,Uid:6e903c9f-05d5-45fd-9d78-2d7516aa0977,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7f84a411cf7b50264d44cfdf1f848c633b2c863fa93fd1ee91582cbe36834db\"" Jan 17 00:23:15.169579 containerd[2105]: time="2026-01-17T00:23:15.169316456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:23:15.199073 systemd-networkd[1659]: vxlan.calico: Gained IPv6LL Jan 17 00:23:15.434085 containerd[2105]: time="2026-01-17T00:23:15.432819934Z" level=info msg="StopPodSandbox for \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\"" Jan 17 00:23:15.434085 containerd[2105]: time="2026-01-17T00:23:15.433341872Z" level=info msg="StopPodSandbox for \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\"" Jan 17 00:23:15.436118 containerd[2105]: time="2026-01-17T00:23:15.436040997Z" level=info msg="StopPodSandbox for \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\"" Jan 17 00:23:15.491421 containerd[2105]: time="2026-01-17T00:23:15.491234175Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:15.514076 containerd[2105]: time="2026-01-17T00:23:15.493725441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:23:15.514479 containerd[2105]: time="2026-01-17T00:23:15.493786718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:23:15.516489 kubelet[3360]: E0117 00:23:15.516204 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:23:15.516950 kubelet[3360]: E0117 00:23:15.516516 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:23:15.539160 kubelet[3360]: E0117 00:23:15.538794 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d6ba05377925445eb2d7612d02a08bcf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:15.549065 containerd[2105]: time="2026-01-17T00:23:15.546519806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.560 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.561 [INFO][5190] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" iface="eth0" netns="/var/run/netns/cni-b6717edf-5ec5-830e-7cb3-84a68a92c3b2" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.565 [INFO][5190] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" iface="eth0" netns="/var/run/netns/cni-b6717edf-5ec5-830e-7cb3-84a68a92c3b2" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.568 [INFO][5190] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" iface="eth0" netns="/var/run/netns/cni-b6717edf-5ec5-830e-7cb3-84a68a92c3b2" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.570 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.570 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.651 [INFO][5206] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.651 [INFO][5206] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.651 [INFO][5206] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.660 [WARNING][5206] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.660 [INFO][5206] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.662 [INFO][5206] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:15.673834 containerd[2105]: 2026-01-17 00:23:15.667 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:15.706695 containerd[2105]: time="2026-01-17T00:23:15.674361598Z" level=info msg="TearDown network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\" successfully" Jan 17 00:23:15.706695 containerd[2105]: time="2026-01-17T00:23:15.674607785Z" level=info msg="StopPodSandbox for \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\" returns successfully" Jan 17 00:23:15.706695 containerd[2105]: time="2026-01-17T00:23:15.682218462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bbb49cd4-pb4fm,Uid:2c85088d-5853-486f-a2a6-a1b33d923ebd,Namespace:calico-system,Attempt:1,}" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.581 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.583 [INFO][5189] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" iface="eth0" netns="/var/run/netns/cni-bae16036-247f-fede-6a28-747479fddcb2" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.584 [INFO][5189] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" iface="eth0" netns="/var/run/netns/cni-bae16036-247f-fede-6a28-747479fddcb2" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.585 [INFO][5189] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" iface="eth0" netns="/var/run/netns/cni-bae16036-247f-fede-6a28-747479fddcb2" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.585 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.585 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.659 [INFO][5211] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.659 [INFO][5211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.662 [INFO][5211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.671 [WARNING][5211] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.671 [INFO][5211] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.678 [INFO][5211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:15.706695 containerd[2105]: 2026-01-17 00:23:15.684 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:15.706695 containerd[2105]: time="2026-01-17T00:23:15.689395887Z" level=info msg="TearDown network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\" successfully" Jan 17 00:23:15.706695 containerd[2105]: time="2026-01-17T00:23:15.689426463Z" level=info msg="StopPodSandbox for \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\" returns successfully" Jan 17 00:23:15.706695 containerd[2105]: time="2026-01-17T00:23:15.699868433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-7zrtb,Uid:2207401f-e738-47bd-8283-8eef3cbcb7c1,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:23:15.680916 systemd[1]: run-netns-cni\x2db6717edf\x2d5ec5\x2d830e\x2d7cb3\x2d84a68a92c3b2.mount: Deactivated successfully. Jan 17 00:23:15.693492 systemd[1]: run-netns-cni\x2dbae16036\x2d247f\x2dfede\x2d6a28\x2d747479fddcb2.mount: Deactivated successfully. Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.591 [INFO][5182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.593 [INFO][5182] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" iface="eth0" netns="/var/run/netns/cni-8f4a837c-e4b0-866b-7349-ca1d6996105d" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.593 [INFO][5182] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" iface="eth0" netns="/var/run/netns/cni-8f4a837c-e4b0-866b-7349-ca1d6996105d" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.593 [INFO][5182] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" iface="eth0" netns="/var/run/netns/cni-8f4a837c-e4b0-866b-7349-ca1d6996105d" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.594 [INFO][5182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.594 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.688 [INFO][5216] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.688 [INFO][5216] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.688 [INFO][5216] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.701 [WARNING][5216] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.701 [INFO][5216] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.703 [INFO][5216] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:15.719605 containerd[2105]: 2026-01-17 00:23:15.708 [INFO][5182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:15.722738 containerd[2105]: time="2026-01-17T00:23:15.720347515Z" level=info msg="TearDown network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\" successfully" Jan 17 00:23:15.722738 containerd[2105]: time="2026-01-17T00:23:15.720382360Z" level=info msg="StopPodSandbox for \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\" returns successfully" Jan 17 00:23:15.723243 containerd[2105]: time="2026-01-17T00:23:15.723172400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv59c,Uid:ba1548f7-6605-4885-a26c-3f894994808a,Namespace:kube-system,Attempt:1,}" Jan 17 00:23:15.726568 systemd[1]: run-netns-cni\x2d8f4a837c\x2de4b0\x2d866b\x2d7349\x2dca1d6996105d.mount: Deactivated successfully. Jan 17 00:23:15.824437 containerd[2105]: time="2026-01-17T00:23:15.824383812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:15.827902 containerd[2105]: time="2026-01-17T00:23:15.827737121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:23:15.827902 containerd[2105]: time="2026-01-17T00:23:15.827770571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:23:15.828708 kubelet[3360]: E0117 00:23:15.828419 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:23:15.828708 kubelet[3360]: E0117 00:23:15.828490 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:23:15.828880 kubelet[3360]: E0117 00:23:15.828646 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:15.830480 kubelet[3360]: E0117 00:23:15.830378 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:23:15.956840 systemd-networkd[1659]: cali033deb2ecb2: Link UP Jan 17 00:23:15.959711 systemd-networkd[1659]: cali033deb2ecb2: Gained carrier Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.814 [INFO][5227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0 calico-kube-controllers-54bbb49cd4- calico-system 2c85088d-5853-486f-a2a6-a1b33d923ebd 
905 0 2026-01-17 00:22:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54bbb49cd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-247 calico-kube-controllers-54bbb49cd4-pb4fm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali033deb2ecb2 [] [] }} ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.814 [INFO][5227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.889 [INFO][5266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" HandleID="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.892 [INFO][5266] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" HandleID="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-247", "pod":"calico-kube-controllers-54bbb49cd4-pb4fm", "timestamp":"2026-01-17 00:23:15.889972719 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.892 [INFO][5266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.892 [INFO][5266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.893 [INFO][5266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.908 [INFO][5266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.914 [INFO][5266] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.922 [INFO][5266] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.925 [INFO][5266] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.928 [INFO][5266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.928 [INFO][5266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.930 [INFO][5266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.937 [INFO][5266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.946 [INFO][5266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.66/26] block=192.168.127.64/26 handle="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.946 [INFO][5266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.66/26] handle="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" host="ip-172-31-29-247" Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.947 [INFO][5266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:15.985996 containerd[2105]: 2026-01-17 00:23:15.947 [INFO][5266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.66/26] IPv6=[] ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" HandleID="k8s-pod-network.9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.988005 containerd[2105]: 2026-01-17 00:23:15.949 [INFO][5227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0", GenerateName:"calico-kube-controllers-54bbb49cd4-", Namespace:"calico-system", SelfLink:"", UID:"2c85088d-5853-486f-a2a6-a1b33d923ebd", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bbb49cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"calico-kube-controllers-54bbb49cd4-pb4fm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali033deb2ecb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:15.988005 containerd[2105]: 2026-01-17 00:23:15.949 [INFO][5227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.66/32] ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.988005 containerd[2105]: 2026-01-17 00:23:15.949 [INFO][5227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali033deb2ecb2 ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.988005 containerd[2105]: 2026-01-17 00:23:15.956 [INFO][5227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:15.988005 containerd[2105]: 
2026-01-17 00:23:15.957 [INFO][5227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0", GenerateName:"calico-kube-controllers-54bbb49cd4-", Namespace:"calico-system", SelfLink:"", UID:"2c85088d-5853-486f-a2a6-a1b33d923ebd", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bbb49cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b", Pod:"calico-kube-controllers-54bbb49cd4-pb4fm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali033deb2ecb2", MAC:"16:53:6b:3f:28:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:15.988005 containerd[2105]: 2026-01-17 00:23:15.977 [INFO][5227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b" Namespace="calico-system" Pod="calico-kube-controllers-54bbb49cd4-pb4fm" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:16.031384 containerd[2105]: time="2026-01-17T00:23:16.030239600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:16.031384 containerd[2105]: time="2026-01-17T00:23:16.030302426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:16.031384 containerd[2105]: time="2026-01-17T00:23:16.030318396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:16.031384 containerd[2105]: time="2026-01-17T00:23:16.030501913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:16.040849 kubelet[3360]: E0117 00:23:16.040764 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:23:16.144506 systemd-networkd[1659]: cali2f87eb0c668: Link UP Jan 17 00:23:16.146958 systemd-networkd[1659]: cali2f87eb0c668: Gained carrier Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:15.808 [INFO][5236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0 calico-apiserver-d8d9c5b87- calico-apiserver 2207401f-e738-47bd-8283-8eef3cbcb7c1 907 0 2026-01-17 00:22:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8d9c5b87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-247 calico-apiserver-d8d9c5b87-7zrtb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2f87eb0c668 [] [] }} ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:15.809 [INFO][5236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:15.907 [INFO][5261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" HandleID="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:15.910 [INFO][5261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" HandleID="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033a140), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"ip-172-31-29-247", "pod":"calico-apiserver-d8d9c5b87-7zrtb", "timestamp":"2026-01-17 00:23:15.907862746 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:15.910 [INFO][5261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:15.946 [INFO][5261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:15.946 [INFO][5261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.008 [INFO][5261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.030 [INFO][5261] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.049 [INFO][5261] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.065 [INFO][5261] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.072 [INFO][5261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.072 [INFO][5261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.079 [INFO][5261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20 Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.096 [INFO][5261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.107 [INFO][5261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.67/26] block=192.168.127.64/26 handle="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.107 [INFO][5261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.67/26] handle="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" host="ip-172-31-29-247" Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.107 [INFO][5261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:16.192068 containerd[2105]: 2026-01-17 00:23:16.107 [INFO][5261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.67/26] IPv6=[] ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" HandleID="k8s-pod-network.6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:16.193935 containerd[2105]: 2026-01-17 00:23:16.125 [INFO][5236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"2207401f-e738-47bd-8283-8eef3cbcb7c1", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"calico-apiserver-d8d9c5b87-7zrtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f87eb0c668", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:16.193935 containerd[2105]: 2026-01-17 00:23:16.128 [INFO][5236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.67/32] ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:16.193935 containerd[2105]: 2026-01-17 00:23:16.128 [INFO][5236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f87eb0c668 ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:16.193935 containerd[2105]: 2026-01-17 00:23:16.147 [INFO][5236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:16.193935 containerd[2105]: 2026-01-17 00:23:16.148 [INFO][5236] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"2207401f-e738-47bd-8283-8eef3cbcb7c1", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20", Pod:"calico-apiserver-d8d9c5b87-7zrtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f87eb0c668", MAC:"92:c8:a8:09:34:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:16.193935 containerd[2105]: 2026-01-17 00:23:16.176 [INFO][5236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-7zrtb" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:16.205481 containerd[2105]: time="2026-01-17T00:23:16.205434577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54bbb49cd4-pb4fm,Uid:2c85088d-5853-486f-a2a6-a1b33d923ebd,Namespace:calico-system,Attempt:1,} returns sandbox id \"9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b\"" Jan 17 00:23:16.207742 containerd[2105]: time="2026-01-17T00:23:16.207504040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:23:16.240007 systemd-networkd[1659]: cali97d0b4063b2: Link UP Jan 17 00:23:16.246120 systemd-networkd[1659]: cali97d0b4063b2: Gained carrier Jan 17 00:23:16.275889 containerd[2105]: time="2026-01-17T00:23:16.271311986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:16.275889 containerd[2105]: time="2026-01-17T00:23:16.271398136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:16.275889 containerd[2105]: time="2026-01-17T00:23:16.271418909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:16.275889 containerd[2105]: time="2026-01-17T00:23:16.271535904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:15.875 [INFO][5248] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0 coredns-668d6bf9bc- kube-system ba1548f7-6605-4885-a26c-3f894994808a 908 0 2026-01-17 00:22:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-247 coredns-668d6bf9bc-tv59c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali97d0b4063b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:15.875 [INFO][5248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:15.931 [INFO][5273] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" HandleID="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:15.932 [INFO][5273] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" HandleID="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-247", "pod":"coredns-668d6bf9bc-tv59c", "timestamp":"2026-01-17 00:23:15.931514819 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:15.932 [INFO][5273] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.108 [INFO][5273] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.108 [INFO][5273] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.134 [INFO][5273] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.174 [INFO][5273] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.183 [INFO][5273] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.186 [INFO][5273] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.189 [INFO][5273] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.189 [INFO][5273] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.191 [INFO][5273] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12 Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.210 [INFO][5273] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.228 [INFO][5273] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.68/26] block=192.168.127.64/26 handle="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.228 [INFO][5273] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.68/26] handle="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" host="ip-172-31-29-247" Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.228 [INFO][5273] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:16.277977 containerd[2105]: 2026-01-17 00:23:16.229 [INFO][5273] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.68/26] IPv6=[] ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" HandleID="k8s-pod-network.fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:16.279800 containerd[2105]: 2026-01-17 00:23:16.233 [INFO][5248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba1548f7-6605-4885-a26c-3f894994808a", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"coredns-668d6bf9bc-tv59c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97d0b4063b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:16.279800 containerd[2105]: 2026-01-17 00:23:16.233 [INFO][5248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.68/32] ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:16.279800 containerd[2105]: 2026-01-17 00:23:16.233 [INFO][5248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97d0b4063b2 ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:16.279800 containerd[2105]: 2026-01-17 00:23:16.240 [INFO][5248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" 
WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:16.279800 containerd[2105]: 2026-01-17 00:23:16.240 [INFO][5248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba1548f7-6605-4885-a26c-3f894994808a", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12", Pod:"coredns-668d6bf9bc-tv59c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97d0b4063b2", MAC:"16:52:6d:dc:2f:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:16.279800 containerd[2105]: 2026-01-17 00:23:16.262 [INFO][5248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12" Namespace="kube-system" Pod="coredns-668d6bf9bc-tv59c" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:16.335231 containerd[2105]: time="2026-01-17T00:23:16.334398587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:16.335231 containerd[2105]: time="2026-01-17T00:23:16.334492511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:16.335231 containerd[2105]: time="2026-01-17T00:23:16.334511508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:16.335231 containerd[2105]: time="2026-01-17T00:23:16.334641489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:16.408806 containerd[2105]: time="2026-01-17T00:23:16.408765956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-7zrtb,Uid:2207401f-e738-47bd-8283-8eef3cbcb7c1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20\"" Jan 17 00:23:16.448168 containerd[2105]: time="2026-01-17T00:23:16.448131059Z" level=info msg="StopPodSandbox for \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\"" Jan 17 00:23:16.452863 containerd[2105]: time="2026-01-17T00:23:16.452825450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tv59c,Uid:ba1548f7-6605-4885-a26c-3f894994808a,Namespace:kube-system,Attempt:1,} returns sandbox id \"fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12\"" Jan 17 00:23:16.466972 containerd[2105]: time="2026-01-17T00:23:16.466602668Z" level=info msg="CreateContainer within sandbox \"fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:23:16.567940 containerd[2105]: time="2026-01-17T00:23:16.567120950Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:16.571105 containerd[2105]: time="2026-01-17T00:23:16.570444407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:23:16.571105 containerd[2105]: time="2026-01-17T00:23:16.570610095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:23:16.575101 containerd[2105]: time="2026-01-17T00:23:16.572542529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:23:16.575213 kubelet[3360]: E0117 00:23:16.571867 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:23:16.575213 kubelet[3360]: E0117 00:23:16.571925 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:23:16.580079 kubelet[3360]: E0117 00:23:16.579299 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8c5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54bbb49cd4-pb4fm_calico-system(2c85088d-5853-486f-a2a6-a1b33d923ebd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:16.581970 kubelet[3360]: E0117 00:23:16.580916 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:16.641160 containerd[2105]: time="2026-01-17T00:23:16.640991896Z" level=info 
msg="CreateContainer within sandbox \"fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbd40dc2e45b4dbb47b82004fe60dfc856154ba437847ff517abb1d23bd2c6ea\"" Jan 17 00:23:16.644952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102623483.mount: Deactivated successfully. Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.534 [INFO][5448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.535 [INFO][5448] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" iface="eth0" netns="/var/run/netns/cni-ffa09a47-0e7d-5337-ea2c-1ff370a7a92e" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.537 [INFO][5448] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" iface="eth0" netns="/var/run/netns/cni-ffa09a47-0e7d-5337-ea2c-1ff370a7a92e" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.537 [INFO][5448] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" iface="eth0" netns="/var/run/netns/cni-ffa09a47-0e7d-5337-ea2c-1ff370a7a92e" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.537 [INFO][5448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.537 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.595 [INFO][5456] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.597 [INFO][5456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.598 [INFO][5456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.613 [WARNING][5456] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.613 [INFO][5456] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.619 [INFO][5456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:16.654238 containerd[2105]: 2026-01-17 00:23:16.638 [INFO][5448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:16.653893 systemd[1]: run-netns-cni\x2dffa09a47\x2d0e7d\x2d5337\x2dea2c\x2d1ff370a7a92e.mount: Deactivated successfully. Jan 17 00:23:16.663826 containerd[2105]: time="2026-01-17T00:23:16.648035969Z" level=info msg="StartContainer for \"fbd40dc2e45b4dbb47b82004fe60dfc856154ba437847ff517abb1d23bd2c6ea\"" Jan 17 00:23:16.663826 containerd[2105]: time="2026-01-17T00:23:16.648493417Z" level=info msg="TearDown network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\" successfully" Jan 17 00:23:16.663826 containerd[2105]: time="2026-01-17T00:23:16.659245134Z" level=info msg="StopPodSandbox for \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\" returns successfully" Jan 17 00:23:16.663826 containerd[2105]: time="2026-01-17T00:23:16.661312027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-h9bhg,Uid:19297f6f-5ccc-4eab-996b-36acef548d9c,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:23:16.825409 containerd[2105]: time="2026-01-17T00:23:16.824593124Z" level=info msg="StartContainer for \"fbd40dc2e45b4dbb47b82004fe60dfc856154ba437847ff517abb1d23bd2c6ea\" returns successfully" Jan 17 00:23:16.914856 containerd[2105]: time="2026-01-17T00:23:16.914772475Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:16.918877 containerd[2105]: time="2026-01-17T00:23:16.918721042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:23:16.918877 containerd[2105]: time="2026-01-17T00:23:16.918815484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:23:16.919175 kubelet[3360]: E0117 00:23:16.919135 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:16.919416 kubelet[3360]: E0117 00:23:16.919190 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:16.919416 kubelet[3360]: E0117 00:23:16.919372 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gh4hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-7zrtb_calico-apiserver(2207401f-e738-47bd-8283-8eef3cbcb7c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:16.924723 kubelet[3360]: E0117 00:23:16.923116 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:23:16.926145 systemd-networkd[1659]: calicb6f3bc7082: Link UP Jan 17 00:23:16.926853 systemd-networkd[1659]: calicb6f3bc7082: Gained carrier Jan 17 00:23:16.928810 systemd-networkd[1659]: cali13f0be5e8e8: Gained IPv6LL Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.806 [INFO][5481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0 calico-apiserver-d8d9c5b87- calico-apiserver 19297f6f-5ccc-4eab-996b-36acef548d9c 932 0 2026-01-17 00:22:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d8d9c5b87 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-247 calico-apiserver-d8d9c5b87-h9bhg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicb6f3bc7082 [] [] }} ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.807 [INFO][5481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.854 [INFO][5506] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" HandleID="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.854 [INFO][5506] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" HandleID="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-247", "pod":"calico-apiserver-d8d9c5b87-h9bhg", "timestamp":"2026-01-17 00:23:16.854162629 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.854 [INFO][5506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.854 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.854 [INFO][5506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.864 [INFO][5506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.872 [INFO][5506] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.880 [INFO][5506] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.883 [INFO][5506] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.887 [INFO][5506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.887 [INFO][5506] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.890 [INFO][5506] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.900 [INFO][5506] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.909 [INFO][5506] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.69/26] block=192.168.127.64/26 handle="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.909 [INFO][5506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.69/26] handle="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" host="ip-172-31-29-247" Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.909 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:16.954683 containerd[2105]: 2026-01-17 00:23:16.909 [INFO][5506] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.69/26] IPv6=[] ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" HandleID="k8s-pod-network.10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.955910 containerd[2105]: 2026-01-17 00:23:16.914 [INFO][5481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"19297f6f-5ccc-4eab-996b-36acef548d9c", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"calico-apiserver-d8d9c5b87-h9bhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6f3bc7082", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:16.955910 containerd[2105]: 2026-01-17 00:23:16.915 [INFO][5481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.69/32] ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.955910 containerd[2105]: 2026-01-17 00:23:16.915 [INFO][5481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb6f3bc7082 ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.955910 containerd[2105]: 2026-01-17 00:23:16.926 [INFO][5481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:16.955910 containerd[2105]: 2026-01-17 00:23:16.927 [INFO][5481] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"19297f6f-5ccc-4eab-996b-36acef548d9c", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e", Pod:"calico-apiserver-d8d9c5b87-h9bhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6f3bc7082", MAC:"3e:8a:e3:03:b4:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:16.955910 containerd[2105]: 2026-01-17 00:23:16.945 [INFO][5481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e" Namespace="calico-apiserver" Pod="calico-apiserver-d8d9c5b87-h9bhg" WorkloadEndpoint="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:17.000931 containerd[2105]: time="2026-01-17T00:23:16.999739429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:17.000931 containerd[2105]: time="2026-01-17T00:23:16.999826392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:17.000931 containerd[2105]: time="2026-01-17T00:23:16.999865592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:17.000931 containerd[2105]: time="2026-01-17T00:23:17.000413446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:17.062080 kubelet[3360]: E0117 00:23:17.061481 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:17.106098 kubelet[3360]: E0117 00:23:17.104850 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:23:17.108621 kubelet[3360]: E0117 00:23:17.106937 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:23:17.199493 kubelet[3360]: I0117 00:23:17.199431 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tv59c" podStartSLOduration=42.199408595 podStartE2EDuration="42.199408595s" podCreationTimestamp="2026-01-17 00:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:23:17.134467598 +0000 UTC m=+46.883107724" watchObservedRunningTime="2026-01-17 00:23:17.199408595 +0000 UTC m=+46.948048720" Jan 17 00:23:17.206801 containerd[2105]: time="2026-01-17T00:23:17.206450588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d8d9c5b87-h9bhg,Uid:19297f6f-5ccc-4eab-996b-36acef548d9c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e\"" Jan 17 00:23:17.210905 containerd[2105]: time="2026-01-17T00:23:17.210654663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:23:17.437806 containerd[2105]: 
time="2026-01-17T00:23:17.436303210Z" level=info msg="StopPodSandbox for \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\"" Jan 17 00:23:17.482790 containerd[2105]: time="2026-01-17T00:23:17.482712825Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:17.484809 containerd[2105]: time="2026-01-17T00:23:17.484734169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:23:17.486031 kubelet[3360]: E0117 00:23:17.485921 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:17.486167 kubelet[3360]: E0117 00:23:17.486097 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:17.486648 containerd[2105]: time="2026-01-17T00:23:17.484893770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:23:17.486806 kubelet[3360]: E0117 00:23:17.486480 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxc9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-h9bhg_calico-apiserver(19297f6f-5ccc-4eab-996b-36acef548d9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:17.488197 kubelet[3360]: E0117 00:23:17.488163 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.547 [INFO][5580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.551 [INFO][5580] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" iface="eth0" netns="/var/run/netns/cni-c4573bf4-3059-828f-8695-fd6df909ece8" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.555 [INFO][5580] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" iface="eth0" netns="/var/run/netns/cni-c4573bf4-3059-828f-8695-fd6df909ece8" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.556 [INFO][5580] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" iface="eth0" netns="/var/run/netns/cni-c4573bf4-3059-828f-8695-fd6df909ece8" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.556 [INFO][5580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.556 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.618 [INFO][5589] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.618 [INFO][5589] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.618 [INFO][5589] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.625 [WARNING][5589] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.625 [INFO][5589] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.627 [INFO][5589] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:17.631840 containerd[2105]: 2026-01-17 00:23:17.629 [INFO][5580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:17.635423 containerd[2105]: time="2026-01-17T00:23:17.631987580Z" level=info msg="TearDown network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\" successfully" Jan 17 00:23:17.635423 containerd[2105]: time="2026-01-17T00:23:17.632015836Z" level=info msg="StopPodSandbox for \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\" returns successfully" Jan 17 00:23:17.635423 containerd[2105]: time="2026-01-17T00:23:17.632768935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hbb8z,Uid:d7198563-8b4e-4b52-ad88-2f9e6d09e79c,Namespace:calico-system,Attempt:1,}" Jan 17 00:23:17.636170 systemd[1]: run-netns-cni\x2dc4573bf4\x2d3059\x2d828f\x2d8695\x2dfd6df909ece8.mount: Deactivated successfully. 
Jan 17 00:23:17.806608 systemd-networkd[1659]: calif36c14020d7: Link UP Jan 17 00:23:17.806771 systemd-networkd[1659]: calif36c14020d7: Gained carrier Jan 17 00:23:17.826464 systemd-networkd[1659]: cali033deb2ecb2: Gained IPv6LL Jan 17 00:23:17.826847 systemd-networkd[1659]: cali97d0b4063b2: Gained IPv6LL Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.709 [INFO][5598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0 csi-node-driver- calico-system d7198563-8b4e-4b52-ad88-2f9e6d09e79c 969 0 2026-01-17 00:22:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-247 csi-node-driver-hbb8z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif36c14020d7 [] [] }} ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.709 [INFO][5598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.752 [INFO][5609] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" HandleID="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.753 [INFO][5609] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" HandleID="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-247", "pod":"csi-node-driver-hbb8z", "timestamp":"2026-01-17 00:23:17.752686677 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.753 [INFO][5609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.753 [INFO][5609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.753 [INFO][5609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.764 [INFO][5609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.769 [INFO][5609] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.774 [INFO][5609] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.777 [INFO][5609] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.781 [INFO][5609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.782 [INFO][5609] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.783 [INFO][5609] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.791 [INFO][5609] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.799 [INFO][5609] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.70/26] block=192.168.127.64/26 handle="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.799 [INFO][5609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.70/26] handle="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" host="ip-172-31-29-247" Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.799 [INFO][5609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:23:17.837354 containerd[2105]: 2026-01-17 00:23:17.799 [INFO][5609] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.70/26] IPv6=[] ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" HandleID="k8s-pod-network.3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.840567 containerd[2105]: 2026-01-17 00:23:17.803 [INFO][5598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7198563-8b4e-4b52-ad88-2f9e6d09e79c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"csi-node-driver-hbb8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif36c14020d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:17.840567 containerd[2105]: 2026-01-17 00:23:17.803 [INFO][5598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.70/32] ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.840567 containerd[2105]: 2026-01-17 00:23:17.803 [INFO][5598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif36c14020d7 ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.840567 containerd[2105]: 2026-01-17 00:23:17.805 [INFO][5598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.840567 containerd[2105]: 2026-01-17 00:23:17.806 [INFO][5598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" 
Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7198563-8b4e-4b52-ad88-2f9e6d09e79c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b", Pod:"csi-node-driver-hbb8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif36c14020d7", MAC:"82:87:2b:f0:1c:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:17.840567 containerd[2105]: 2026-01-17 00:23:17.829 [INFO][5598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b" Namespace="calico-system" Pod="csi-node-driver-hbb8z" WorkloadEndpoint="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:17.873501 containerd[2105]: time="2026-01-17T00:23:17.873384865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:17.874502 containerd[2105]: time="2026-01-17T00:23:17.874072580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:17.879066 containerd[2105]: time="2026-01-17T00:23:17.875651522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:17.880273 containerd[2105]: time="2026-01-17T00:23:17.879742653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:17.974751 containerd[2105]: time="2026-01-17T00:23:17.974678919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hbb8z,Uid:d7198563-8b4e-4b52-ad88-2f9e6d09e79c,Namespace:calico-system,Attempt:1,} returns sandbox id \"3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b\"" Jan 17 00:23:17.978949 containerd[2105]: time="2026-01-17T00:23:17.978688622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:23:18.014918 systemd-networkd[1659]: cali2f87eb0c668: Gained IPv6LL Jan 17 00:23:18.015364 systemd-networkd[1659]: calicb6f3bc7082: Gained IPv6LL Jan 17 00:23:18.109420 kubelet[3360]: E0117 00:23:18.109015 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:18.119761 kubelet[3360]: E0117 00:23:18.119439 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:18.119761 kubelet[3360]: E0117 00:23:18.119647 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:23:18.266068 containerd[2105]: time="2026-01-17T00:23:18.265486503Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:18.267963 containerd[2105]: time="2026-01-17T00:23:18.267799395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:23:18.268124 containerd[2105]: time="2026-01-17T00:23:18.267870138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:23:18.269085 kubelet[3360]: E0117 00:23:18.268448 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:18.269085 kubelet[3360]: E0117 00:23:18.268505 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:18.269085 kubelet[3360]: E0117 00:23:18.268664 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:18.272158 containerd[2105]: time="2026-01-17T00:23:18.271984197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:23:18.413488 systemd[1]: Started sshd@7-172.31.29.247:22-4.153.228.146:44956.service - OpenSSH per-connection server daemon (4.153.228.146:44956). 
Jan 17 00:23:18.436831 containerd[2105]: time="2026-01-17T00:23:18.435399142Z" level=info msg="StopPodSandbox for \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\"" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.520 [INFO][5685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.520 [INFO][5685] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" iface="eth0" netns="/var/run/netns/cni-92e44255-42dd-0257-c2f4-cf6a7ad87a35" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.521 [INFO][5685] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" iface="eth0" netns="/var/run/netns/cni-92e44255-42dd-0257-c2f4-cf6a7ad87a35" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.521 [INFO][5685] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" iface="eth0" netns="/var/run/netns/cni-92e44255-42dd-0257-c2f4-cf6a7ad87a35" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.521 [INFO][5685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.521 [INFO][5685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.559 [INFO][5693] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.559 [INFO][5693] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.559 [INFO][5693] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.568 [WARNING][5693] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.569 [INFO][5693] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.571 [INFO][5693] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:18.576734 containerd[2105]: 2026-01-17 00:23:18.574 [INFO][5685] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:18.577477 containerd[2105]: time="2026-01-17T00:23:18.577450235Z" level=info msg="TearDown network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\" successfully" Jan 17 00:23:18.577571 containerd[2105]: time="2026-01-17T00:23:18.577552051Z" level=info msg="StopPodSandbox for \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\" returns successfully" Jan 17 00:23:18.578998 containerd[2105]: time="2026-01-17T00:23:18.578653781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qm2t4,Uid:c635936a-f4da-49b1-a5f7-daacf10da049,Namespace:kube-system,Attempt:1,}" Jan 17 00:23:18.603510 systemd[1]: run-netns-cni\x2d92e44255\x2d42dd\x2d0257\x2dc2f4\x2dcf6a7ad87a35.mount: Deactivated successfully. Jan 17 00:23:18.711361 containerd[2105]: time="2026-01-17T00:23:18.711182476Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:18.714202 containerd[2105]: time="2026-01-17T00:23:18.713831697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:23:18.714202 containerd[2105]: time="2026-01-17T00:23:18.713837360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:23:18.714411 kubelet[3360]: E0117 00:23:18.714205 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:18.714411 kubelet[3360]: E0117 00:23:18.714314 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:18.714779 kubelet[3360]: E0117 00:23:18.714518 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:18.716577 kubelet[3360]: E0117 00:23:18.715917 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:23:18.787339 systemd-networkd[1659]: cali4283b6259fd: Link UP Jan 17 00:23:18.789351 systemd-networkd[1659]: cali4283b6259fd: Gained carrier Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.668 [INFO][5700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0 coredns-668d6bf9bc- kube-system c635936a-f4da-49b1-a5f7-daacf10da049 1023 0 2026-01-17 00:22:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-247 coredns-668d6bf9bc-qm2t4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4283b6259fd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.668 [INFO][5700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.707 [INFO][5711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" HandleID="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.707 [INFO][5711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" HandleID="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-247", "pod":"coredns-668d6bf9bc-qm2t4", "timestamp":"2026-01-17 00:23:18.707633947 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.707 [INFO][5711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.707 [INFO][5711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.708 [INFO][5711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.718 [INFO][5711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.728 [INFO][5711] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.738 [INFO][5711] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.745 [INFO][5711] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.749 [INFO][5711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.750 [INFO][5711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.754 [INFO][5711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874 Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.762 [INFO][5711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.773 [INFO][5711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.71/26] block=192.168.127.64/26 handle="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.773 [INFO][5711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.71/26] handle="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" host="ip-172-31-29-247" Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.773 [INFO][5711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
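The IPAM trace above shows Calico's block-affinity pattern on this node: ip-172-31-29-247 holds an affinity for the block 192.168.127.64/26, and each CNI ADD claims the next free address from that block (192.168.127.71 for this coredns pod; the goldmane pod further down gets 192.168.127.72). A minimal sketch of that idea in Python, using only the standard ipaddress module; this is an illustrative model of block-affinity allocation, not Calico's actual datastore code:

```python
import ipaddress

# Illustrative model only (not Calico's implementation): the host owns a
# /26 block and each assignment takes the lowest free address from it.
class BlockIPAM:
    def __init__(self, cidr):
        self.block = ipaddress.ip_network(cidr)
        self.assigned = set()

    def auto_assign(self):
        # hosts() skips the network and broadcast addresses of the /26.
        for addr in self.block.hosts():
            if addr not in self.assigned:
                self.assigned.add(addr)
                return addr
        raise RuntimeError(f"block {self.block} exhausted")

ipam = BlockIPAM("192.168.127.64/26")
for _ in range(7):
    ip = ipam.auto_assign()
print(ip)  # 192.168.127.71 -- the 7th address, matching the claim in the log
```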
Jan 17 00:23:18.818986 containerd[2105]: 2026-01-17 00:23:18.773 [INFO][5711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.71/26] IPv6=[] ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" HandleID="k8s-pod-network.816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.819992 containerd[2105]: 2026-01-17 00:23:18.776 [INFO][5700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c635936a-f4da-49b1-a5f7-daacf10da049", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"coredns-668d6bf9bc-qm2t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4283b6259fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:18.819992 containerd[2105]: 2026-01-17 00:23:18.776 [INFO][5700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.71/32] ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.819992 containerd[2105]: 2026-01-17 00:23:18.776 [INFO][5700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4283b6259fd ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.819992 containerd[2105]: 2026-01-17 00:23:18.788 [INFO][5700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" 
WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.819992 containerd[2105]: 2026-01-17 00:23:18.790 [INFO][5700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c635936a-f4da-49b1-a5f7-daacf10da049", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874", Pod:"coredns-668d6bf9bc-qm2t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4283b6259fd", MAC:"8e:4f:39:92:1d:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:18.819992 containerd[2105]: 2026-01-17 00:23:18.814 [INFO][5700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874" Namespace="kube-system" Pod="coredns-668d6bf9bc-qm2t4" WorkloadEndpoint="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:18.869523 containerd[2105]: time="2026-01-17T00:23:18.869080678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:18.869523 containerd[2105]: time="2026-01-17T00:23:18.869310600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:18.869523 containerd[2105]: time="2026-01-17T00:23:18.869391722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.870812 containerd[2105]: time="2026-01-17T00:23:18.869809458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:18.983816 sshd[5672]: Accepted publickey for core from 4.153.228.146 port 44956 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:18.990898 sshd[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:18.992366 containerd[2105]: time="2026-01-17T00:23:18.991286642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qm2t4,Uid:c635936a-f4da-49b1-a5f7-daacf10da049,Namespace:kube-system,Attempt:1,} returns sandbox id \"816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874\"" Jan 17 00:23:19.010418 containerd[2105]: time="2026-01-17T00:23:19.010353114Z" level=info msg="CreateContainer within sandbox \"816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:23:19.059263 systemd-logind[2080]: New session 8 of user core. Jan 17 00:23:19.077496 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:23:19.085282 containerd[2105]: time="2026-01-17T00:23:19.084871098Z" level=info msg="CreateContainer within sandbox \"816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa94da481161347be37a1b96c66d3cec058def11f5d4840951a1795138d735d9\"" Jan 17 00:23:19.088125 containerd[2105]: time="2026-01-17T00:23:19.087809056Z" level=info msg="StartContainer for \"aa94da481161347be37a1b96c66d3cec058def11f5d4840951a1795138d735d9\"" Jan 17 00:23:19.169243 kubelet[3360]: E0117 00:23:19.167290 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:23:19.169243 kubelet[3360]: E0117 00:23:19.166801 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:19.256222 containerd[2105]: time="2026-01-17T00:23:19.256096940Z" level=info msg="StartContainer for \"aa94da481161347be37a1b96c66d3cec058def11f5d4840951a1795138d735d9\" returns successfully" Jan 17 00:23:19.439729 containerd[2105]: 
time="2026-01-17T00:23:19.439670985Z" level=info msg="StopPodSandbox for \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\"" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.558 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.564 [INFO][5814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" iface="eth0" netns="/var/run/netns/cni-465bb577-506b-c182-16ed-43ce02fb4a25" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.570 [INFO][5814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" iface="eth0" netns="/var/run/netns/cni-465bb577-506b-c182-16ed-43ce02fb4a25" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.572 [INFO][5814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" iface="eth0" netns="/var/run/netns/cni-465bb577-506b-c182-16ed-43ce02fb4a25" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.572 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.572 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.649 [INFO][5824] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.649 [INFO][5824] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.649 [INFO][5824] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.671 [WARNING][5824] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.672 [INFO][5824] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.677 [INFO][5824] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:19.687331 containerd[2105]: 2026-01-17 00:23:19.683 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:19.691066 containerd[2105]: time="2026-01-17T00:23:19.691010448Z" level=info msg="TearDown network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\" successfully" Jan 17 00:23:19.691373 containerd[2105]: time="2026-01-17T00:23:19.691074268Z" level=info msg="StopPodSandbox for \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\" returns successfully" Jan 17 00:23:19.693984 containerd[2105]: time="2026-01-17T00:23:19.693808039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22ww4,Uid:804c4956-a77e-4057-9db7-9d50191156a3,Namespace:calico-system,Attempt:1,}" Jan 17 00:23:19.696997 systemd[1]: run-netns-cni\x2d465bb577\x2d506b\x2dc182\x2d16ed\x2d43ce02fb4a25.mount: Deactivated successfully. Jan 17 00:23:19.745103 systemd-networkd[1659]: calif36c14020d7: Gained IPv6LL Jan 17 00:23:19.977463 systemd-networkd[1659]: calid0583f371a6: Link UP Jan 17 00:23:19.980478 systemd-networkd[1659]: calid0583f371a6: Gained carrier Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.807 [INFO][5835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0 goldmane-666569f655- calico-system 804c4956-a77e-4057-9db7-9d50191156a3 1049 0 2026-01-17 00:22:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-247 goldmane-666569f655-22ww4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid0583f371a6 [] [] }} ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.808 [INFO][5835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.891 [INFO][5851] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" HandleID="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.891 [INFO][5851] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" HandleID="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-247", "pod":"goldmane-666569f655-22ww4", "timestamp":"2026-01-17 00:23:19.891252299 +0000 UTC"}, Hostname:"ip-172-31-29-247", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.891 [INFO][5851] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.891 [INFO][5851] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.891 [INFO][5851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-247' Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.902 [INFO][5851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.912 [INFO][5851] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.919 [INFO][5851] ipam/ipam.go 511: Trying affinity for 192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.922 [INFO][5851] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.925 [INFO][5851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.925 [INFO][5851] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.927 [INFO][5851] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6 Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.942 [INFO][5851] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.959 [INFO][5851] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.72/26] block=192.168.127.64/26 handle="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.960 [INFO][5851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.72/26] handle="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" host="ip-172-31-29-247" Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.960 [INFO][5851] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
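Both assignment sequences bracket their work with "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": concurrent CNI invocations on the same node are serialized so two pods can never claim the same address. A hedged sketch of that serialization pattern using an advisory file lock; fcntl.flock is standard Python, but the lock-file path below is invented for illustration and is not the mechanism or path Calico uses:

```python
import fcntl
from contextlib import contextmanager

@contextmanager
def host_wide_lock(path="/tmp/example-ipam.lock"):  # hypothetical path
    # flock provides an advisory host-wide mutex: a second concurrent
    # invocation blocks here until the first releases the lock.
    with open(path, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            yield
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

with host_wide_lock():
    pass  # read block state, pick a free address, write the claim back
```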
Jan 17 00:23:20.057831 containerd[2105]: 2026-01-17 00:23:19.960 [INFO][5851] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.72/26] IPv6=[] ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" HandleID="k8s-pod-network.a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:20.062439 containerd[2105]: 2026-01-17 00:23:19.967 [INFO][5835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"804c4956-a77e-4057-9db7-9d50191156a3", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"", Pod:"goldmane-666569f655-22ww4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0583f371a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:20.062439 containerd[2105]: 2026-01-17 00:23:19.967 [INFO][5835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.72/32] ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:20.062439 containerd[2105]: 2026-01-17 00:23:19.967 [INFO][5835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0583f371a6 ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:20.062439 containerd[2105]: 2026-01-17 00:23:19.980 [INFO][5835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:20.062439 containerd[2105]: 2026-01-17 00:23:19.984 [INFO][5835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" 
WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"804c4956-a77e-4057-9db7-9d50191156a3", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6", Pod:"goldmane-666569f655-22ww4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0583f371a6", MAC:"2a:39:c2:83:c7:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:20.062439 containerd[2105]: 2026-01-17 00:23:20.036 [INFO][5835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6" Namespace="calico-system" Pod="goldmane-666569f655-22ww4" WorkloadEndpoint="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:20.131027 containerd[2105]: time="2026-01-17T00:23:20.128923362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:23:20.131027 containerd[2105]: time="2026-01-17T00:23:20.129290075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:23:20.131027 containerd[2105]: time="2026-01-17T00:23:20.129466723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:20.132119 containerd[2105]: time="2026-01-17T00:23:20.131123003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:23:20.219946 kubelet[3360]: I0117 00:23:20.218253 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qm2t4" podStartSLOduration=45.218225705 podStartE2EDuration="45.218225705s" podCreationTimestamp="2026-01-17 00:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:23:20.215362396 +0000 UTC m=+49.964002524" watchObservedRunningTime="2026-01-17 00:23:20.218225705 +0000 UTC m=+49.966865831" Jan 17 00:23:20.425252 containerd[2105]: time="2026-01-17T00:23:20.424762034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-22ww4,Uid:804c4956-a77e-4057-9db7-9d50191156a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6\"" Jan 17 00:23:20.437215 containerd[2105]: time="2026-01-17T00:23:20.436446186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:23:20.573765 sshd[5672]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:20.579651 systemd[1]: sshd@7-172.31.29.247:22-4.153.228.146:44956.service: Deactivated successfully. Jan 17 00:23:20.584934 systemd-logind[2080]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:23:20.587728 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:23:20.589093 systemd-logind[2080]: Removed session 8. Jan 17 00:23:20.638495 systemd-networkd[1659]: cali4283b6259fd: Gained IPv6LL Jan 17 00:23:20.752167 containerd[2105]: time="2026-01-17T00:23:20.751839728Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:20.754392 containerd[2105]: time="2026-01-17T00:23:20.754031825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:23:20.754392 containerd[2105]: time="2026-01-17T00:23:20.754256297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:23:20.754868 kubelet[3360]: E0117 00:23:20.754807 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:23:20.754868 kubelet[3360]: E0117 00:23:20.754864 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:23:20.755413 kubelet[3360]: E0117 00:23:20.755070 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27bk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22ww4_calico-system(804c4956-a77e-4057-9db7-9d50191156a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:20.756584 kubelet[3360]: E0117 00:23:20.756508 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:23:21.150164 systemd-resolved[1998]: Under 
memory pressure, flushing caches. Jan 17 00:23:21.154358 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:23:21.150187 systemd-resolved[1998]: Flushed all caches. Jan 17 00:23:21.202414 kubelet[3360]: E0117 00:23:21.201774 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:23:21.855287 systemd-networkd[1659]: calid0583f371a6: Gained IPv6LL Jan 17 00:23:22.203652 kubelet[3360]: E0117 00:23:22.203593 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:23:24.151349 ntpd[2066]: Listen normally on 6 vxlan.calico 192.168.127.64:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 6 vxlan.calico 192.168.127.64:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 7 vxlan.calico [fe80::64ce:47ff:fef8:c9be%4]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 8 cali13f0be5e8e8 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 9 cali033deb2ecb2 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 10 cali2f87eb0c668 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 11 cali97d0b4063b2 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 12 calicb6f3bc7082 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 13 calif36c14020d7 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 14 cali4283b6259fd [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 00:23:24.152603 ntpd[2066]: 17 Jan 00:23:24 ntpd[2066]: Listen normally on 15 calid0583f371a6 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 17 00:23:24.151473 ntpd[2066]: Listen normally on 7 vxlan.calico [fe80::64ce:47ff:fef8:c9be%4]:123 Jan 17 00:23:24.151575 ntpd[2066]: Listen normally on 8 cali13f0be5e8e8 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 00:23:24.151622 ntpd[2066]: Listen normally on 9 cali033deb2ecb2 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:23:24.151683 ntpd[2066]: Listen normally on 10 cali2f87eb0c668 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:23:24.151744 ntpd[2066]: Listen normally on 11 cali97d0b4063b2 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 00:23:24.151787 ntpd[2066]: Listen normally on 12 calicb6f3bc7082 [fe80::ecee:eeff:feee:eeee%11]:123 
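In the ntpd burst here, each new cali* interface is picked up with a link-local address such as fe80::ecee:eeff:feee:eeee%7: the %N suffix is the IPv6 zone index, needed because the same fe80:: address repeats on every interface and is only unambiguous together with the interface it lives on. A small sketch of how that suffix maps to a numeric scope_id, assuming any interface name that exists on the host (lo is used here):

```python
import socket

# The %7, %8, ... suffixes in the ntpd lines are IPv6 zone indices.
name = "lo"  # assumption: substitute any interface present on the host
print(socket.if_nametoindex(name))  # numeric zone index for this interface

# getaddrinfo turns an "address%zone" string into a sockaddr whose fourth
# element is the scope_id corresponding to that interface.
info = socket.getaddrinfo(f"fe80::1%{name}", 123, socket.AF_INET6,
                          socket.SOCK_DGRAM, flags=socket.AI_NUMERICHOST)
print(info[0][4])  # ('fe80::1...', 123, 0, scope_id)
```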
Jan 17 00:23:24.151847 ntpd[2066]: Listen normally on 13 calif36c14020d7 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 00:23:24.151896 ntpd[2066]: Listen normally on 14 cali4283b6259fd [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 00:23:24.151936 ntpd[2066]: Listen normally on 15 calid0583f371a6 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 17 00:23:25.668335 systemd[1]: Started sshd@8-172.31.29.247:22-4.153.228.146:60242.service - OpenSSH per-connection server daemon (4.153.228.146:60242). Jan 17 00:23:26.191267 sshd[5932]: Accepted publickey for core from 4.153.228.146 port 60242 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:26.192712 sshd[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:26.198419 systemd-logind[2080]: New session 9 of user core. Jan 17 00:23:26.204487 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:23:26.673728 sshd[5932]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:26.676674 systemd[1]: sshd@8-172.31.29.247:22-4.153.228.146:60242.service: Deactivated successfully. Jan 17 00:23:26.681094 systemd-logind[2080]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:23:26.681988 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:23:26.683668 systemd-logind[2080]: Removed session 9. Jan 17 00:23:30.481892 containerd[2105]: time="2026-01-17T00:23:30.481067737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:23:30.488310 containerd[2105]: time="2026-01-17T00:23:30.487935730Z" level=info msg="StopPodSandbox for \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\"" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.552 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"19297f6f-5ccc-4eab-996b-36acef548d9c", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e", Pod:"calico-apiserver-d8d9c5b87-h9bhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6f3bc7082", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.553 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.553 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" iface="eth0" netns="" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.553 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.553 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.589 [INFO][5971] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.591 [INFO][5971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.591 [INFO][5971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.598 [WARNING][5971] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.599 [INFO][5971] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.600 [INFO][5971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:30.604442 containerd[2105]: 2026-01-17 00:23:30.602 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.605808 containerd[2105]: time="2026-01-17T00:23:30.604516494Z" level=info msg="TearDown network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\" successfully" Jan 17 00:23:30.605808 containerd[2105]: time="2026-01-17T00:23:30.604543392Z" level=info msg="StopPodSandbox for \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\" returns successfully" Jan 17 00:23:30.605808 containerd[2105]: time="2026-01-17T00:23:30.605313774Z" level=info msg="RemovePodSandbox for \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\"" Jan 17 00:23:30.605808 containerd[2105]: time="2026-01-17T00:23:30.605469926Z" level=info msg="Forcibly stopping sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\"" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.649 [WARNING][5985] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"19297f6f-5ccc-4eab-996b-36acef548d9c", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"10d132318367dfa28f6e718c95ce5ff13ef72cabb9103cc3cde704ef26465e0e", Pod:"calico-apiserver-d8d9c5b87-h9bhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb6f3bc7082", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.649 [INFO][5985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.649 [INFO][5985] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" iface="eth0" netns="" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.650 [INFO][5985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.650 [INFO][5985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.675 [INFO][5993] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.675 [INFO][5993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.675 [INFO][5993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.682 [WARNING][5993] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.682 [INFO][5993] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" HandleID="k8s-pod-network.c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--h9bhg-eth0" Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.684 [INFO][5993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:30.689082 containerd[2105]: 2026-01-17 00:23:30.686 [INFO][5985] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580" Jan 17 00:23:30.689082 containerd[2105]: time="2026-01-17T00:23:30.688407886Z" level=info msg="TearDown network for sandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\" successfully" Jan 17 00:23:30.699706 containerd[2105]: time="2026-01-17T00:23:30.699633427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:30.699842 containerd[2105]: time="2026-01-17T00:23:30.699734587Z" level=info msg="RemovePodSandbox \"c5b888a2b683b8092ddae8dcfcfc9910f5d32d420abd0bdfe46189350677c580\" returns successfully" Jan 17 00:23:30.700287 containerd[2105]: time="2026-01-17T00:23:30.700264524Z" level=info msg="StopPodSandbox for \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\"" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.743 [WARNING][6007] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.743 [INFO][6007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.743 [INFO][6007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" iface="eth0" netns="" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.743 [INFO][6007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.743 [INFO][6007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.768 [INFO][6014] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.768 [INFO][6014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.768 [INFO][6014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.774 [WARNING][6014] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.774 [INFO][6014] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.776 [INFO][6014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:30.780886 containerd[2105]: 2026-01-17 00:23:30.778 [INFO][6007] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.781708 containerd[2105]: time="2026-01-17T00:23:30.780886881Z" level=info msg="TearDown network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\" successfully" Jan 17 00:23:30.781708 containerd[2105]: time="2026-01-17T00:23:30.780927940Z" level=info msg="StopPodSandbox for \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\" returns successfully" Jan 17 00:23:30.783353 containerd[2105]: time="2026-01-17T00:23:30.783313591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:30.783919 containerd[2105]: time="2026-01-17T00:23:30.783884950Z" level=info msg="RemovePodSandbox for \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\"" Jan 17 00:23:30.784024 containerd[2105]: time="2026-01-17T00:23:30.783929320Z" level=info msg="Forcibly stopping sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\"" Jan 17 00:23:30.787039 containerd[2105]: time="2026-01-17T00:23:30.786914259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:23:30.789502 containerd[2105]: time="2026-01-17T00:23:30.787028476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:23:30.789502 containerd[2105]: time="2026-01-17T00:23:30.788316042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:23:30.789650 kubelet[3360]: E0117 00:23:30.787211 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:23:30.789650 kubelet[3360]: E0117 00:23:30.787272 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:23:30.789650 kubelet[3360]: E0117 00:23:30.787546 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d6ba05377925445eb2d7612d02a08bcf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.835 [WARNING][6028] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" WorkloadEndpoint="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.835 [INFO][6028] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.835 [INFO][6028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" iface="eth0" netns="" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.835 [INFO][6028] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.835 [INFO][6028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.859 [INFO][6035] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.859 [INFO][6035] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.859 [INFO][6035] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.868 [WARNING][6035] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.868 [INFO][6035] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" HandleID="k8s-pod-network.d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Workload="ip--172--31--29--247-k8s-whisker--5bbb6f8cc6--kdvhn-eth0" Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.870 [INFO][6035] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:30.874492 containerd[2105]: 2026-01-17 00:23:30.872 [INFO][6028] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4" Jan 17 00:23:30.875167 containerd[2105]: time="2026-01-17T00:23:30.875120229Z" level=info msg="TearDown network for sandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\" successfully" Jan 17 00:23:30.881643 containerd[2105]: time="2026-01-17T00:23:30.881600521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:30.881860 containerd[2105]: time="2026-01-17T00:23:30.881822814Z" level=info msg="RemovePodSandbox \"d126b70d44281e94769cc3c2aa449aaea7a2f97834ca664fd1ae055e80bd99c4\" returns successfully" Jan 17 00:23:30.882323 containerd[2105]: time="2026-01-17T00:23:30.882298244Z" level=info msg="StopPodSandbox for \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\"" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.922 [WARNING][6049] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"804c4956-a77e-4057-9db7-9d50191156a3", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6", Pod:"goldmane-666569f655-22ww4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0583f371a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.922 [INFO][6049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.923 [INFO][6049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" iface="eth0" netns="" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.923 [INFO][6049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.923 [INFO][6049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.949 [INFO][6057] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.949 [INFO][6057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.949 [INFO][6057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.957 [WARNING][6057] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.957 [INFO][6057] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.959 [INFO][6057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:30.964557 containerd[2105]: 2026-01-17 00:23:30.962 [INFO][6049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:30.964557 containerd[2105]: time="2026-01-17T00:23:30.964403917Z" level=info msg="TearDown network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\" successfully" Jan 17 00:23:30.964557 containerd[2105]: time="2026-01-17T00:23:30.964442265Z" level=info msg="StopPodSandbox for \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\" returns successfully" Jan 17 00:23:30.965588 containerd[2105]: time="2026-01-17T00:23:30.965256857Z" level=info msg="RemovePodSandbox for \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\"" Jan 17 00:23:30.965588 containerd[2105]: time="2026-01-17T00:23:30.965293192Z" level=info msg="Forcibly stopping sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\"" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.010 [WARNING][6072] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"804c4956-a77e-4057-9db7-9d50191156a3", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"a0f5083f695ddbddf2021561d2f17d9df2e7428e90277fe222e5a39e919e1bd6", Pod:"goldmane-666569f655-22ww4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0583f371a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.011 [INFO][6072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.011 [INFO][6072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" iface="eth0" netns="" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.011 [INFO][6072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.011 [INFO][6072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.036 [INFO][6079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.036 [INFO][6079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.036 [INFO][6079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.043 [WARNING][6079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.043 [INFO][6079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" HandleID="k8s-pod-network.36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Workload="ip--172--31--29--247-k8s-goldmane--666569f655--22ww4-eth0" Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.045 [INFO][6079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.050283 containerd[2105]: 2026-01-17 00:23:31.047 [INFO][6072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195" Jan 17 00:23:31.050283 containerd[2105]: time="2026-01-17T00:23:31.050249115Z" level=info msg="TearDown network for sandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\" successfully" Jan 17 00:23:31.058155 containerd[2105]: time="2026-01-17T00:23:31.058083709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:31.058155 containerd[2105]: time="2026-01-17T00:23:31.058152178Z" level=info msg="RemovePodSandbox \"36a3bc6358104a5bf1b055e3e2abdff80984ddfc4169b42a948d40778f752195\" returns successfully" Jan 17 00:23:31.058919 containerd[2105]: time="2026-01-17T00:23:31.058632180Z" level=info msg="StopPodSandbox for \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\"" Jan 17 00:23:31.082548 containerd[2105]: time="2026-01-17T00:23:31.082501848Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:31.085099 containerd[2105]: time="2026-01-17T00:23:31.084784638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:23:31.085731 kubelet[3360]: E0117 00:23:31.085561 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:23:31.085875 kubelet[3360]: E0117 00:23:31.085771 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:23:31.086420 containerd[2105]: time="2026-01-17T00:23:31.085805818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes 
read=85" Jan 17 00:23:31.086503 kubelet[3360]: E0117 00:23:31.086404 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8c5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54bbb49cd4-pb4fm_calico-system(2c85088d-5853-486f-a2a6-a1b33d923ebd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:31.089847 kubelet[3360]: E0117 00:23:31.089195 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" 
podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:31.091107 containerd[2105]: time="2026-01-17T00:23:31.090476166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.110 [WARNING][6093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0", GenerateName:"calico-kube-controllers-54bbb49cd4-", Namespace:"calico-system", SelfLink:"", UID:"2c85088d-5853-486f-a2a6-a1b33d923ebd", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bbb49cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b", Pod:"calico-kube-controllers-54bbb49cd4-pb4fm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali033deb2ecb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.110 [INFO][6093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.110 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" iface="eth0" netns="" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.111 [INFO][6093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.111 [INFO][6093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.157 [INFO][6100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.157 [INFO][6100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.157 [INFO][6100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.166 [WARNING][6100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.166 [INFO][6100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.168 [INFO][6100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.176037 containerd[2105]: 2026-01-17 00:23:31.172 [INFO][6093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.176037 containerd[2105]: time="2026-01-17T00:23:31.175697227Z" level=info msg="TearDown network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\" successfully" Jan 17 00:23:31.176037 containerd[2105]: time="2026-01-17T00:23:31.175732204Z" level=info msg="StopPodSandbox for \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\" returns successfully" Jan 17 00:23:31.178270 containerd[2105]: time="2026-01-17T00:23:31.178211194Z" level=info msg="RemovePodSandbox for \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\"" Jan 17 00:23:31.178409 containerd[2105]: time="2026-01-17T00:23:31.178363661Z" level=info msg="Forcibly stopping sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\"" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.237 [WARNING][6115] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0", GenerateName:"calico-kube-controllers-54bbb49cd4-", Namespace:"calico-system", SelfLink:"", UID:"2c85088d-5853-486f-a2a6-a1b33d923ebd", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54bbb49cd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"9943749f05ee2fadf6f89ce6decafb30740631058c516ba758a579c407c9bb6b", Pod:"calico-kube-controllers-54bbb49cd4-pb4fm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali033deb2ecb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.238 [INFO][6115] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.238 [INFO][6115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" iface="eth0" netns="" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.238 [INFO][6115] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.238 [INFO][6115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.268 [INFO][6122] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.268 [INFO][6122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.268 [INFO][6122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.275 [WARNING][6122] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.275 [INFO][6122] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" HandleID="k8s-pod-network.26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Workload="ip--172--31--29--247-k8s-calico--kube--controllers--54bbb49cd4--pb4fm-eth0" Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.276 [INFO][6122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.281736 containerd[2105]: 2026-01-17 00:23:31.279 [INFO][6115] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867" Jan 17 00:23:31.282436 containerd[2105]: time="2026-01-17T00:23:31.281784185Z" level=info msg="TearDown network for sandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\" successfully" Jan 17 00:23:31.287778 containerd[2105]: time="2026-01-17T00:23:31.287703367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:31.287916 containerd[2105]: time="2026-01-17T00:23:31.287807204Z" level=info msg="RemovePodSandbox \"26272e339bdb066bfaad3c7970af463d3c46c5c1fc8da454b05648854f729867\" returns successfully" Jan 17 00:23:31.288473 containerd[2105]: time="2026-01-17T00:23:31.288428537Z" level=info msg="StopPodSandbox for \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\"" Jan 17 00:23:31.332475 containerd[2105]: time="2026-01-17T00:23:31.331393661Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:31.335552 containerd[2105]: time="2026-01-17T00:23:31.335423214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:23:31.335552 containerd[2105]: time="2026-01-17T00:23:31.335510318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:23:31.336008 kubelet[3360]: E0117 00:23:31.335961 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:23:31.336111 kubelet[3360]: E0117 00:23:31.336013 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:23:31.336176 kubelet[3360]: E0117 00:23:31.336139 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:31.337886 kubelet[3360]: E0117 00:23:31.337842 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.328 [WARNING][6136] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c635936a-f4da-49b1-a5f7-daacf10da049", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874", Pod:"coredns-668d6bf9bc-qm2t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4283b6259fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.329 [INFO][6136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.329 [INFO][6136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" iface="eth0" netns="" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.329 [INFO][6136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.329 [INFO][6136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.365 [INFO][6143] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.365 [INFO][6143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.365 [INFO][6143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.373 [WARNING][6143] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.373 [INFO][6143] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.375 [INFO][6143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.379575 containerd[2105]: 2026-01-17 00:23:31.377 [INFO][6136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.379575 containerd[2105]: time="2026-01-17T00:23:31.379418068Z" level=info msg="TearDown network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\" successfully" Jan 17 00:23:31.379575 containerd[2105]: time="2026-01-17T00:23:31.379441001Z" level=info msg="StopPodSandbox for \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\" returns successfully" Jan 17 00:23:31.382680 containerd[2105]: time="2026-01-17T00:23:31.380261205Z" level=info msg="RemovePodSandbox for \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\"" Jan 17 00:23:31.382680 containerd[2105]: time="2026-01-17T00:23:31.380296656Z" level=info msg="Forcibly stopping sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\"" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.423 [WARNING][6157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c635936a-f4da-49b1-a5f7-daacf10da049", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"816497aec2e9b7dd84fd3e89fe335f3c94941fb8a32993573306d7b37e5ec874", Pod:"coredns-668d6bf9bc-qm2t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4283b6259fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.424 [INFO][6157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.424 [INFO][6157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" iface="eth0" netns="" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.424 [INFO][6157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.424 [INFO][6157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.450 [INFO][6164] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.450 [INFO][6164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.450 [INFO][6164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.456 [WARNING][6164] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.456 [INFO][6164] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" HandleID="k8s-pod-network.48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--qm2t4-eth0" Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.458 [INFO][6164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.463135 containerd[2105]: 2026-01-17 00:23:31.460 [INFO][6157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518" Jan 17 00:23:31.463135 containerd[2105]: time="2026-01-17T00:23:31.462781155Z" level=info msg="TearDown network for sandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\" successfully" Jan 17 00:23:31.468276 containerd[2105]: time="2026-01-17T00:23:31.468138788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:31.468276 containerd[2105]: time="2026-01-17T00:23:31.468207452Z" level=info msg="RemovePodSandbox \"48e910dd9f2cce464f250c797c9ca1ab2d9aff522047ce008dc86872d7650518\" returns successfully" Jan 17 00:23:31.468668 containerd[2105]: time="2026-01-17T00:23:31.468639725Z" level=info msg="StopPodSandbox for \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\"" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.513 [WARNING][6179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba1548f7-6605-4885-a26c-3f894994808a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12", Pod:"coredns-668d6bf9bc-tv59c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97d0b4063b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.514 [INFO][6179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.514 [INFO][6179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" iface="eth0" netns="" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.514 [INFO][6179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.514 [INFO][6179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.540 [INFO][6187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.540 [INFO][6187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.540 [INFO][6187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.547 [WARNING][6187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.547 [INFO][6187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.549 [INFO][6187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.553796 containerd[2105]: 2026-01-17 00:23:31.551 [INFO][6179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.554635 containerd[2105]: time="2026-01-17T00:23:31.553827999Z" level=info msg="TearDown network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\" successfully" Jan 17 00:23:31.554635 containerd[2105]: time="2026-01-17T00:23:31.553856033Z" level=info msg="StopPodSandbox for \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\" returns successfully" Jan 17 00:23:31.555222 containerd[2105]: time="2026-01-17T00:23:31.555180636Z" level=info msg="RemovePodSandbox for \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\"" Jan 17 00:23:31.555333 containerd[2105]: time="2026-01-17T00:23:31.555231783Z" level=info msg="Forcibly stopping sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\"" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.599 [WARNING][6202] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba1548f7-6605-4885-a26c-3f894994808a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"fd0ccc5ba3336c8cd927145b94c1878e9d6c27584f56f52f6c7a4e0d1d46ed12", Pod:"coredns-668d6bf9bc-tv59c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97d0b4063b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.600 [INFO][6202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.600 [INFO][6202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" iface="eth0" netns="" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.600 [INFO][6202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.600 [INFO][6202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.627 [INFO][6210] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.627 [INFO][6210] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.628 [INFO][6210] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.635 [WARNING][6210] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.635 [INFO][6210] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" HandleID="k8s-pod-network.a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Workload="ip--172--31--29--247-k8s-coredns--668d6bf9bc--tv59c-eth0" Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.637 [INFO][6210] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.642790 containerd[2105]: 2026-01-17 00:23:31.640 [INFO][6202] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1" Jan 17 00:23:31.642790 containerd[2105]: time="2026-01-17T00:23:31.642751280Z" level=info msg="TearDown network for sandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\" successfully" Jan 17 00:23:31.648885 containerd[2105]: time="2026-01-17T00:23:31.648810862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:31.648885 containerd[2105]: time="2026-01-17T00:23:31.648879340Z" level=info msg="RemovePodSandbox \"a462251376b8fd77137a53c43c580e1e44367ce98a85d96cd56fa15b081b76d1\" returns successfully" Jan 17 00:23:31.649687 containerd[2105]: time="2026-01-17T00:23:31.649657799Z" level=info msg="StopPodSandbox for \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\"" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.687 [WARNING][6224] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7198563-8b4e-4b52-ad88-2f9e6d09e79c", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b", Pod:"csi-node-driver-hbb8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif36c14020d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.688 [INFO][6224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.688 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" iface="eth0" netns="" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.688 [INFO][6224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.688 [INFO][6224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.716 [INFO][6233] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.716 [INFO][6233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.716 [INFO][6233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.723 [WARNING][6233] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.723 [INFO][6233] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.725 [INFO][6233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.729826 containerd[2105]: 2026-01-17 00:23:31.727 [INFO][6224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.730318 containerd[2105]: time="2026-01-17T00:23:31.729862888Z" level=info msg="TearDown network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\" successfully" Jan 17 00:23:31.730318 containerd[2105]: time="2026-01-17T00:23:31.729890280Z" level=info msg="StopPodSandbox for \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\" returns successfully" Jan 17 00:23:31.730383 containerd[2105]: time="2026-01-17T00:23:31.730342327Z" level=info msg="RemovePodSandbox for \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\"" Jan 17 00:23:31.730383 containerd[2105]: time="2026-01-17T00:23:31.730365963Z" level=info msg="Forcibly stopping sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\"" Jan 17 00:23:31.756883 systemd[1]: Started sshd@9-172.31.29.247:22-4.153.228.146:60246.service - OpenSSH per-connection server daemon (4.153.228.146:60246). Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.800 [WARNING][6247] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7198563-8b4e-4b52-ad88-2f9e6d09e79c", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"3448c9e74aad229ec2941c23fa797b226eca03aadad36593888e6c6c00bbc96b", Pod:"csi-node-driver-hbb8z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif36c14020d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.801 [INFO][6247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.801 [INFO][6247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" iface="eth0" netns="" Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.801 [INFO][6247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.801 [INFO][6247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.830 [INFO][6261] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.830 [INFO][6261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.830 [INFO][6261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.837 [WARNING][6261] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.837 [INFO][6261] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" HandleID="k8s-pod-network.fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Workload="ip--172--31--29--247-k8s-csi--node--driver--hbb8z-eth0" Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.839 [INFO][6261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.843404 containerd[2105]: 2026-01-17 00:23:31.841 [INFO][6247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d" Jan 17 00:23:31.844844 containerd[2105]: time="2026-01-17T00:23:31.843910210Z" level=info msg="TearDown network for sandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\" successfully" Jan 17 00:23:31.849668 containerd[2105]: time="2026-01-17T00:23:31.849607221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:31.849801 containerd[2105]: time="2026-01-17T00:23:31.849700567Z" level=info msg="RemovePodSandbox \"fc6dde19b2f83e8ef364aaaca4dd06a41bc73b828a62a271d5989b1980a9f77d\" returns successfully" Jan 17 00:23:31.850841 containerd[2105]: time="2026-01-17T00:23:31.850421582Z" level=info msg="StopPodSandbox for \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\"" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.888 [WARNING][6276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"2207401f-e738-47bd-8283-8eef3cbcb7c1", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20", Pod:"calico-apiserver-d8d9c5b87-7zrtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f87eb0c668", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.889 [INFO][6276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.889 [INFO][6276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" iface="eth0" netns="" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.889 [INFO][6276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.889 [INFO][6276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.916 [INFO][6283] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.916 [INFO][6283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.917 [INFO][6283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.924 [WARNING][6283] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.924 [INFO][6283] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.926 [INFO][6283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:31.932461 containerd[2105]: 2026-01-17 00:23:31.928 [INFO][6276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:31.932461 containerd[2105]: time="2026-01-17T00:23:31.932345671Z" level=info msg="TearDown network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\" successfully" Jan 17 00:23:31.932461 containerd[2105]: time="2026-01-17T00:23:31.932368835Z" level=info msg="StopPodSandbox for \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\" returns successfully" Jan 17 00:23:31.933539 containerd[2105]: time="2026-01-17T00:23:31.933215259Z" level=info msg="RemovePodSandbox for \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\"" Jan 17 00:23:31.933539 containerd[2105]: time="2026-01-17T00:23:31.933254702Z" level=info msg="Forcibly stopping sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\"" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:31.978 [WARNING][6297] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0", GenerateName:"calico-apiserver-d8d9c5b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"2207401f-e738-47bd-8283-8eef3cbcb7c1", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 22, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d8d9c5b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-247", ContainerID:"6122deef238d4a59feb94fa3c79e7990eb69121d11e20992ae4d176ed6e2bc20", Pod:"calico-apiserver-d8d9c5b87-7zrtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2f87eb0c668", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:31.979 [INFO][6297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:31.979 [INFO][6297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" iface="eth0" netns="" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:31.979 [INFO][6297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:31.979 [INFO][6297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:32.005 [INFO][6304] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:32.006 [INFO][6304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:32.006 [INFO][6304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:32.013 [WARNING][6304] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:32.013 [INFO][6304] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" HandleID="k8s-pod-network.49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Workload="ip--172--31--29--247-k8s-calico--apiserver--d8d9c5b87--7zrtb-eth0" Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:32.015 [INFO][6304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:23:32.019229 containerd[2105]: 2026-01-17 00:23:32.017 [INFO][6297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4" Jan 17 00:23:32.019894 containerd[2105]: time="2026-01-17T00:23:32.019257946Z" level=info msg="TearDown network for sandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\" successfully" Jan 17 00:23:32.024704 containerd[2105]: time="2026-01-17T00:23:32.024648354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:23:32.024992 containerd[2105]: time="2026-01-17T00:23:32.024714081Z" level=info msg="RemovePodSandbox \"49affb1dc4232257bc231469b639dfe93b294e9b7d7ce8931a71b254a85190b4\" returns successfully" Jan 17 00:23:32.305919 sshd[6251]: Accepted publickey for core from 4.153.228.146 port 60246 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:32.339579 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:32.345717 systemd-logind[2080]: New session 10 of user core. Jan 17 00:23:32.349594 systemd[1]: Started session-10.scope - Session 10 of User core. 
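[Annotation] The repeated WARNING/INFO cycles above show Calico's CNI DEL path being deliberately idempotent: it serializes on a host-wide IPAM lock, refuses to delete a WorkloadEndpoint whose recorded ContainerID differs from CNI_CONTAINERID ("don't delete WEP"), and treats "address doesn't exist" as success so StopPodSandbox/RemovePodSandbox can still return successfully on a second pass. A minimal Go sketch of that release pattern, with hypothetical names (this is not Calico's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// releaser mimics the host-wide IPAM lock and idempotent release seen in
// the ipam_plugin.go log lines above. All names here are illustrative.
type releaser struct {
	mu          sync.Mutex        // stands in for the host-wide IPAM lock
	allocations map[string]string // handleID -> address
}

// Release is safe to call for handles that were already cleaned up: a
// missing allocation is logged and ignored, matching the "Asked to
// release address but it doesn't exist. Ignoring" warning above.
func (r *releaser) Release(handleID string) error {
	r.mu.Lock() // "Acquired host-wide IPAM lock."
	defer r.mu.Unlock()

	addr, ok := r.allocations[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist; ignoring\n", handleID)
		return nil // idempotent: teardown still succeeds
	}
	delete(r.allocations, handleID)
	fmt.Printf("released %s for handle %q\n", addr, handleID)
	return nil
}

func main() {
	r := &releaser{allocations: map[string]string{
		"k8s-pod-network.48e910dd": "192.168.127.68/32",
	}}
	// A second release of the same handle is a no-op, not an error.
	_ = r.Release("k8s-pod-network.48e910dd")
	_ = r.Release("k8s-pod-network.48e910dd")
}
```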
Jan 17 00:23:32.434662 containerd[2105]: time="2026-01-17T00:23:32.434275538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:23:32.700815 containerd[2105]: time="2026-01-17T00:23:32.700765970Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:32.702998 containerd[2105]: time="2026-01-17T00:23:32.702885168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:23:32.702998 containerd[2105]: time="2026-01-17T00:23:32.702951525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:23:32.703232 kubelet[3360]: E0117 00:23:32.703116 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:32.703232 kubelet[3360]: E0117 00:23:32.703158 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:32.704640 kubelet[3360]: E0117 00:23:32.703271 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gh4hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-7zrtb_calico-apiserver(2207401f-e738-47bd-8283-8eef3cbcb7c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:32.704899 kubelet[3360]: E0117 00:23:32.704857 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:23:32.756471 sshd[6251]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:32.759778 systemd[1]: sshd@9-172.31.29.247:22-4.153.228.146:60246.service: Deactivated successfully. Jan 17 00:23:32.764413 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:23:32.765195 systemd-logind[2080]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:23:32.766490 systemd-logind[2080]: Removed session 10. Jan 17 00:23:32.838597 systemd[1]: Started sshd@10-172.31.29.247:22-4.153.228.146:60248.service - OpenSSH per-connection server daemon (4.153.228.146:60248). Jan 17 00:23:33.315909 sshd[6325]: Accepted publickey for core from 4.153.228.146 port 60248 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:33.317547 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:33.322778 systemd-logind[2080]: New session 11 of user core. Jan 17 00:23:33.329660 systemd[1]: Started session-11.scope - Session 11 of User core. 
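[Annotation] The PullImage failures in this stretch are plain 404s from the registry: containerd logs "trying next host - response was http.StatusNotFound" because ghcr.io has no manifest for the v3.30.4 tag under that name. Assuming the repository is public, the same condition can be reproduced against the standard OCI distribution HTTP API; the anonymous-token flow below is the usual ghcr.io arrangement, but treat the endpoint details as assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Probe whether ghcr.io has a manifest for the tag containerd failed to
// resolve above.
func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4"

	// 1. Fetch an anonymous pull token.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. Manifest lookup; a 404 here is what containerd reports as
	//    "response was http.StatusNotFound".
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println(res.StatusCode) // 404 => tag not published
}
```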
Jan 17 00:23:33.434571 containerd[2105]: time="2026-01-17T00:23:33.434215494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:23:33.727371 containerd[2105]: time="2026-01-17T00:23:33.727318017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:33.730132 containerd[2105]: time="2026-01-17T00:23:33.730030095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:23:33.730345 containerd[2105]: time="2026-01-17T00:23:33.730179099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:23:33.731630 kubelet[3360]: E0117 00:23:33.731580 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:33.733437 kubelet[3360]: E0117 00:23:33.731654 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:33.733437 kubelet[3360]: E0117 00:23:33.731820 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxc9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-h9bhg_calico-apiserver(19297f6f-5ccc-4eab-996b-36acef548d9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:33.735574 kubelet[3360]: E0117 00:23:33.735489 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:33.796796 sshd[6325]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:33.800779 systemd[1]: sshd@10-172.31.29.247:22-4.153.228.146:60248.service: Deactivated successfully. Jan 17 00:23:33.803851 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:23:33.804768 systemd-logind[2080]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:23:33.806540 systemd-logind[2080]: Removed session 11. Jan 17 00:23:33.879661 systemd[1]: Started sshd@11-172.31.29.247:22-4.153.228.146:60254.service - OpenSSH per-connection server daemon (4.153.228.146:60254). Jan 17 00:23:34.357693 sshd[6337]: Accepted publickey for core from 4.153.228.146 port 60254 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:34.359482 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:34.364932 systemd-logind[2080]: New session 12 of user core. Jan 17 00:23:34.374395 systemd[1]: Started session-12.scope - Session 12 of User core. 
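[Annotation] The kubelet entries above show how the failure propagates: the CRI PullImage call returns a gRPC status of code = NotFound, kuberuntime surfaces it as ErrImagePull, and pod_workers logs "Error syncing pod, skipping". On the caller side that is an ordinary gRPC status inspection, roughly as below (an illustrative sketch, not kubelet's actual code; the gRPC status/codes packages are real):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// pullImage stands in for the CRI ImageService call that failed above.
func pullImage(ref string) error {
	// The runtime returned codes.NotFound because the registry had no
	// manifest for the tag; reproduce that status here.
	return status.Errorf(codes.NotFound,
		"failed to pull and unpack image %q: not found", ref)
}

func main() {
	err := pullImage("ghcr.io/flatcar/calico/apiserver:v3.30.4")
	// status.Code extracts the gRPC code from the error; NotFound is
	// terminal for this tag, so retrying the same reference cannot help.
	if status.Code(err) == codes.NotFound {
		fmt.Println("image tag does not exist upstream:", err)
	}
}
```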
Jan 17 00:23:34.436934 containerd[2105]: time="2026-01-17T00:23:34.436364653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:23:34.720172 containerd[2105]: time="2026-01-17T00:23:34.720101285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:34.723448 containerd[2105]: time="2026-01-17T00:23:34.722159472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:23:34.723448 containerd[2105]: time="2026-01-17T00:23:34.722264114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:23:34.726968 kubelet[3360]: E0117 00:23:34.722448 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:34.726968 kubelet[3360]: E0117 00:23:34.722492 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:23:34.726968 kubelet[3360]: E0117 00:23:34.722687 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:34.728806 containerd[2105]: time="2026-01-17T00:23:34.727661325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:23:34.790686 sshd[6337]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:34.794255 systemd[1]: sshd@11-172.31.29.247:22-4.153.228.146:60254.service: Deactivated successfully. Jan 17 00:23:34.798525 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:23:34.799480 systemd-logind[2080]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:23:34.800502 systemd-logind[2080]: Removed session 12. 
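[Annotation] Once a pull fails with ErrImagePull, kubelet does not retry immediately; the later entries for these same pods show ImagePullBackOff. The policy is exponential backoff with a cap; the commonly cited kubelet defaults are 10s doubling to 5m, but treat those exact numbers as assumptions rather than values read from this node's configuration. A minimal sketch of that shape:

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the retry delay up to a cap, the shape of kubelet's
// image pull backoff ("Back-off pulling image ..."). The 10s/5m values
// are assumptions.
func nextDelay(prev time.Duration) time.Duration {
	const (
		initial = 10 * time.Second
		max     = 5 * time.Minute
	)
	if prev == 0 {
		return initial
	}
	if next := prev * 2; next < max {
		return next
	}
	return max
}

func main() {
	var d time.Duration
	for i := 0; i < 7; i++ {
		d = nextDelay(d)
		fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m 5m
	}
}
```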
Jan 17 00:23:35.002122 containerd[2105]: time="2026-01-17T00:23:35.001947681Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:35.005224 containerd[2105]: time="2026-01-17T00:23:35.005144178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:23:35.005567 containerd[2105]: time="2026-01-17T00:23:35.005180871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:23:35.005735 kubelet[3360]: E0117 00:23:35.005682 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:35.006608 kubelet[3360]: E0117 00:23:35.005740 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:23:35.006608 kubelet[3360]: E0117 00:23:35.005883 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:35.007669 kubelet[3360]: E0117 00:23:35.007600 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:23:36.434238 containerd[2105]: time="2026-01-17T00:23:36.433982575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:23:36.748451 containerd[2105]: time="2026-01-17T00:23:36.748304424Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:36.750867 containerd[2105]: time="2026-01-17T00:23:36.750808493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:23:36.750867 containerd[2105]: time="2026-01-17T00:23:36.750825534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:23:36.751119 kubelet[3360]: E0117 00:23:36.751019 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:23:36.751119 kubelet[3360]: E0117 00:23:36.751078 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:23:36.751643 kubelet[3360]: E0117 00:23:36.751206 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27bk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22ww4_calico-system(804c4956-a77e-4057-9db7-9d50191156a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:36.752737 kubelet[3360]: E0117 00:23:36.752645 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:23:39.873528 systemd[1]: Started sshd@12-172.31.29.247:22-4.153.228.146:54322.service - OpenSSH per-connection server daemon (4.153.228.146:54322). Jan 17 00:23:40.352809 sshd[6363]: Accepted publickey for core from 4.153.228.146 port 54322 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:40.354918 sshd[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:40.360492 systemd-logind[2080]: New session 13 of user core. Jan 17 00:23:40.370576 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:23:40.774425 sshd[6363]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:40.779509 systemd[1]: sshd@12-172.31.29.247:22-4.153.228.146:54322.service: Deactivated successfully. Jan 17 00:23:40.784785 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:23:40.784974 systemd-logind[2080]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:23:40.787157 systemd-logind[2080]: Removed session 13. 
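[Annotation] Every reference that fails in this run (apiserver, csi, node-driver-registrar, goldmane, and the whisker images below) is syntactically valid; the failures are missing tags upstream, not malformed names. That distinction can be checked with the distribution reference parser that containerd itself builds on (a sketch; module path as in github.com/distribution/reference):

```go
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	// The exact reference containerd failed to resolve above.
	named, err := reference.ParseNormalizedNamed(
		"ghcr.io/flatcar/calico/goldmane:v3.30.4")
	if err != nil {
		panic(err) // would fire only for a syntactically bad reference
	}
	fmt.Println(reference.Domain(named)) // ghcr.io
	fmt.Println(reference.Path(named))   // flatcar/calico/goldmane
	if tagged, ok := named.(reference.Tagged); ok {
		fmt.Println(tagged.Tag()) // v3.30.4 — valid syntax, absent upstream
	}
}
```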
Jan 17 00:23:42.434828 kubelet[3360]: E0117 00:23:42.434761 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:23:44.434093 kubelet[3360]: E0117 00:23:44.432934 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:45.869881 systemd[1]: Started sshd@13-172.31.29.247:22-4.153.228.146:33086.service - OpenSSH per-connection server daemon (4.153.228.146:33086). Jan 17 00:23:46.395486 sshd[6400]: Accepted publickey for core from 4.153.228.146 port 33086 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:46.396670 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:46.403270 systemd-logind[2080]: New session 14 of user core. Jan 17 00:23:46.408427 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:23:46.434564 kubelet[3360]: E0117 00:23:46.433920 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:23:46.883670 sshd[6400]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:46.890913 systemd-logind[2080]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:23:46.892474 systemd[1]: sshd@13-172.31.29.247:22-4.153.228.146:33086.service: Deactivated successfully. Jan 17 00:23:46.896357 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:23:46.898466 systemd-logind[2080]: Removed session 14. 
Jan 17 00:23:47.440537 kubelet[3360]: E0117 00:23:47.440217 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:49.434761 kubelet[3360]: E0117 00:23:49.434091 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:23:49.435214 kubelet[3360]: E0117 00:23:49.434938 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:23:51.964841 systemd[1]: Started sshd@14-172.31.29.247:22-4.153.228.146:33100.service - OpenSSH per-connection server daemon (4.153.228.146:33100). Jan 17 00:23:52.457476 sshd[6414]: Accepted publickey for core from 4.153.228.146 port 33100 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:52.460264 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:52.471212 systemd-logind[2080]: New session 15 of user core. Jan 17 00:23:52.476871 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:23:53.097801 sshd[6414]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:53.110561 systemd[1]: sshd@14-172.31.29.247:22-4.153.228.146:33100.service: Deactivated successfully. Jan 17 00:23:53.110605 systemd-logind[2080]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:23:53.116810 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:23:53.136569 systemd-logind[2080]: Removed session 15. 
Jan 17 00:23:56.436970 containerd[2105]: time="2026-01-17T00:23:56.436002952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:23:56.724183 containerd[2105]: time="2026-01-17T00:23:56.723846258Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:56.727095 containerd[2105]: time="2026-01-17T00:23:56.726345817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:23:56.727095 containerd[2105]: time="2026-01-17T00:23:56.726663513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:23:56.727453 kubelet[3360]: E0117 00:23:56.727021 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:23:56.729913 kubelet[3360]: E0117 00:23:56.727473 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:23:56.729913 kubelet[3360]: E0117 00:23:56.727812 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8c5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54bbb49cd4-pb4fm_calico-system(2c85088d-5853-486f-a2a6-a1b33d923ebd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:56.729913 kubelet[3360]: E0117 00:23:56.729490 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:23:56.730531 containerd[2105]: time="2026-01-17T00:23:56.727853716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:23:56.962184 containerd[2105]: time="2026-01-17T00:23:56.962133580Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:56.964396 containerd[2105]: time="2026-01-17T00:23:56.964312363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:23:56.964564 containerd[2105]: time="2026-01-17T00:23:56.964446558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:23:56.965013 kubelet[3360]: E0117 00:23:56.964779 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:23:56.965151 kubelet[3360]: E0117 00:23:56.965029 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:23:56.965254 kubelet[3360]: E0117 00:23:56.965195 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d6ba05377925445eb2d7612d02a08bcf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:56.968147 containerd[2105]: time="2026-01-17T00:23:56.968038949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:23:57.236761 containerd[2105]: time="2026-01-17T00:23:57.236710024Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:57.239164 containerd[2105]: time="2026-01-17T00:23:57.239090504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:23:57.239164 containerd[2105]: time="2026-01-17T00:23:57.239183901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:23:57.239635 kubelet[3360]: E0117 00:23:57.239583 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 
00:23:57.239635 kubelet[3360]: E0117 00:23:57.239638 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:23:57.239824 kubelet[3360]: E0117 00:23:57.239744 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:57.241390 kubelet[3360]: E0117 00:23:57.241245 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:23:58.178371 systemd[1]: Started sshd@15-172.31.29.247:22-4.153.228.146:47026.service - OpenSSH per-connection server daemon (4.153.228.146:47026). Jan 17 00:23:58.729955 sshd[6435]: Accepted publickey for core from 4.153.228.146 port 47026 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:58.733275 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:58.741393 systemd-logind[2080]: New session 16 of user core. Jan 17 00:23:58.744400 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:23:59.355402 sshd[6435]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:59.358953 systemd[1]: sshd@15-172.31.29.247:22-4.153.228.146:47026.service: Deactivated successfully. Jan 17 00:23:59.368906 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:23:59.370664 systemd-logind[2080]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:23:59.371873 systemd-logind[2080]: Removed session 16. Jan 17 00:23:59.425478 systemd[1]: Started sshd@16-172.31.29.247:22-4.153.228.146:47032.service - OpenSSH per-connection server daemon (4.153.228.146:47032). Jan 17 00:23:59.437640 containerd[2105]: time="2026-01-17T00:23:59.437419711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:23:59.735967 containerd[2105]: time="2026-01-17T00:23:59.735714799Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:23:59.739841 containerd[2105]: time="2026-01-17T00:23:59.739354260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:23:59.739841 containerd[2105]: time="2026-01-17T00:23:59.739750815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:23:59.740106 kubelet[3360]: E0117 00:23:59.740026 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:59.740601 kubelet[3360]: E0117 00:23:59.740132 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:23:59.740601 kubelet[3360]: E0117 00:23:59.740293 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxc9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-h9bhg_calico-apiserver(19297f6f-5ccc-4eab-996b-36acef548d9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:23:59.742034 kubelet[3360]: E0117 00:23:59.741999 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:23:59.916282 sshd[6449]: Accepted publickey for core from 4.153.228.146 port 47032 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:59.918142 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:59.923268 systemd-logind[2080]: New session 17 of user core. Jan 17 00:23:59.931454 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 00:24:00.436376 containerd[2105]: time="2026-01-17T00:24:00.435402468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:24:00.715561 containerd[2105]: time="2026-01-17T00:24:00.715427346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:00.717641 containerd[2105]: time="2026-01-17T00:24:00.717585448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:24:00.717641 containerd[2105]: time="2026-01-17T00:24:00.717677052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:24:00.717924 kubelet[3360]: E0117 00:24:00.717817 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:24:00.717924 kubelet[3360]: E0117 00:24:00.717857 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:24:00.718128 kubelet[3360]: E0117 00:24:00.718084 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gh4hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-7zrtb_calico-apiserver(2207401f-e738-47bd-8283-8eef3cbcb7c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:00.718696 containerd[2105]: time="2026-01-17T00:24:00.718591423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:24:00.719479 kubelet[3360]: E0117 00:24:00.719435 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:24:00.990939 containerd[2105]: time="2026-01-17T00:24:00.990806771Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:00.993274 containerd[2105]: time="2026-01-17T00:24:00.993106375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:24:00.993274 containerd[2105]: time="2026-01-17T00:24:00.993210146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:24:00.995082 kubelet[3360]: E0117 00:24:00.993388 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:24:00.995082 kubelet[3360]: E0117 00:24:00.993439 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:24:00.995082 kubelet[3360]: E0117 00:24:00.993581 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:00.996861 containerd[2105]: time="2026-01-17T00:24:00.996478333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:24:01.265486 containerd[2105]: time="2026-01-17T00:24:01.265161664Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:01.268167 containerd[2105]: time="2026-01-17T00:24:01.267771591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:24:01.268636 containerd[2105]: time="2026-01-17T00:24:01.267768238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:24:01.269490 kubelet[3360]: E0117 00:24:01.268690 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:24:01.269490 kubelet[3360]: E0117 00:24:01.268775 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:24:01.269812 kubelet[3360]: E0117 00:24:01.269737 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:01.281186 kubelet[3360]: E0117 00:24:01.271309 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:24:01.546623 containerd[2105]: time="2026-01-17T00:24:01.524461477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:24:02.116021 containerd[2105]: time="2026-01-17T00:24:02.115721282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:02.134448 containerd[2105]: time="2026-01-17T00:24:02.133919848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:24:02.134448 containerd[2105]: time="2026-01-17T00:24:02.134389375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:24:02.144711 kubelet[3360]: E0117 00:24:02.134755 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:24:02.144711 kubelet[3360]: E0117 00:24:02.134806 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:24:02.144711 kubelet[3360]: E0117 00:24:02.134979 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27bk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22ww4_calico-system(804c4956-a77e-4057-9db7-9d50191156a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:02.156023 kubelet[3360]: E0117 00:24:02.155966 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:24:04.548066 sshd[6449]: 
pam_unix(sshd:session): session closed for user core Jan 17 00:24:04.557551 systemd[1]: sshd@16-172.31.29.247:22-4.153.228.146:47032.service: Deactivated successfully. Jan 17 00:24:04.563763 systemd-logind[2080]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:24:04.567595 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:24:04.569893 systemd-logind[2080]: Removed session 17. Jan 17 00:24:04.647906 systemd[1]: Started sshd@17-172.31.29.247:22-4.153.228.146:58314.service - OpenSSH per-connection server daemon (4.153.228.146:58314). Jan 17 00:24:05.183527 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:24:05.185335 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:24:05.183575 systemd-resolved[1998]: Flushed all caches. Jan 17 00:24:05.242263 sshd[6463]: Accepted publickey for core from 4.153.228.146 port 58314 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:05.248512 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:05.277294 systemd-logind[2080]: New session 18 of user core. Jan 17 00:24:05.281987 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:24:06.730867 sshd[6463]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:06.739450 systemd[1]: sshd@17-172.31.29.247:22-4.153.228.146:58314.service: Deactivated successfully. Jan 17 00:24:06.748401 systemd-logind[2080]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:24:06.757024 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:24:06.763243 systemd-logind[2080]: Removed session 18. Jan 17 00:24:06.817969 systemd[1]: Started sshd@18-172.31.29.247:22-4.153.228.146:58316.service - OpenSSH per-connection server daemon (4.153.228.146:58316). Jan 17 00:24:07.337194 sshd[6484]: Accepted publickey for core from 4.153.228.146 port 58316 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:07.338978 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:07.343782 systemd-logind[2080]: New session 19 of user core. Jan 17 00:24:07.349395 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:24:07.433542 kubelet[3360]: E0117 00:24:07.433151 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:24:08.155745 sshd[6484]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:08.159100 systemd[1]: sshd@18-172.31.29.247:22-4.153.228.146:58316.service: Deactivated successfully. Jan 17 00:24:08.163702 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:24:08.165249 systemd-logind[2080]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:24:08.166958 systemd-logind[2080]: Removed session 19. Jan 17 00:24:08.241807 systemd[1]: Started sshd@19-172.31.29.247:22-4.153.228.146:58332.service - OpenSSH per-connection server daemon (4.153.228.146:58332). 
Jan 17 00:24:08.785497 sshd[6498]: Accepted publickey for core from 4.153.228.146 port 58332 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:08.788186 sshd[6498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:08.793452 systemd-logind[2080]: New session 20 of user core. Jan 17 00:24:08.799689 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:24:09.426460 sshd[6498]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:09.432844 systemd[1]: sshd@19-172.31.29.247:22-4.153.228.146:58332.service: Deactivated successfully. Jan 17 00:24:09.434302 systemd-logind[2080]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:24:09.445589 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:24:09.447571 systemd-logind[2080]: Removed session 20. Jan 17 00:24:11.436940 kubelet[3360]: E0117 00:24:11.436890 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:24:12.440349 kubelet[3360]: E0117 00:24:12.440288 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:24:13.126381 systemd[1]: run-containerd-runc-k8s.io-2a9ac62098cbe44056cc962e8390c9b6530a74529de5c67e5db24a87f202ebf8-runc.pBoB4s.mount: Deactivated successfully. 
Jan 17 00:24:13.436876 kubelet[3360]: E0117 00:24:13.435678 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:24:13.436876 kubelet[3360]: E0117 00:24:13.436439 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:24:14.436915 kubelet[3360]: E0117 00:24:14.436849 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:24:14.521814 systemd[1]: Started sshd@20-172.31.29.247:22-4.153.228.146:40180.service - OpenSSH per-connection server daemon (4.153.228.146:40180). Jan 17 00:24:15.069185 sshd[6536]: Accepted publickey for core from 4.153.228.146 port 40180 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:15.077258 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:15.089240 systemd-logind[2080]: New session 21 of user core. Jan 17 00:24:15.094342 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:24:15.759311 sshd[6536]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:15.768282 systemd-logind[2080]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:24:15.769810 systemd[1]: sshd@20-172.31.29.247:22-4.153.228.146:40180.service: Deactivated successfully. Jan 17 00:24:15.785741 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:24:15.790205 systemd-logind[2080]: Removed session 21. 
Jan 17 00:24:19.445706 kubelet[3360]: E0117 00:24:19.445653 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:24:20.849425 systemd[1]: Started sshd@21-172.31.29.247:22-4.153.228.146:40182.service - OpenSSH per-connection server daemon (4.153.228.146:40182). Jan 17 00:24:21.387913 sshd[6550]: Accepted publickey for core from 4.153.228.146 port 40182 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:21.390142 sshd[6550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:21.395716 systemd-logind[2080]: New session 22 of user core. Jan 17 00:24:21.399359 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:24:21.857031 sshd[6550]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:21.871556 systemd[1]: sshd@21-172.31.29.247:22-4.153.228.146:40182.service: Deactivated successfully. Jan 17 00:24:21.881225 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:24:21.884521 systemd-logind[2080]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:24:21.887763 systemd-logind[2080]: Removed session 22. Jan 17 00:24:24.436900 kubelet[3360]: E0117 00:24:24.436467 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:24:25.433854 kubelet[3360]: E0117 00:24:25.433690 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:24:25.483064 kubelet[3360]: E0117 00:24:25.482406 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:24:26.953830 systemd[1]: Started sshd@22-172.31.29.247:22-4.153.228.146:49230.service - OpenSSH per-connection server daemon (4.153.228.146:49230). Jan 17 00:24:27.516807 sshd[6564]: Accepted publickey for core from 4.153.228.146 port 49230 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:27.528300 sshd[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:27.534382 systemd-logind[2080]: New session 23 of user core. Jan 17 00:24:27.542226 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:24:28.386718 sshd[6564]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:28.393291 systemd-logind[2080]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:24:28.393866 systemd[1]: sshd@22-172.31.29.247:22-4.153.228.146:49230.service: Deactivated successfully. Jan 17 00:24:28.398482 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:24:28.400572 systemd-logind[2080]: Removed session 23. 
Jan 17 00:24:28.435767 kubelet[3360]: E0117 00:24:28.435725 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:24:28.436700 kubelet[3360]: E0117 00:24:28.435923 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:24:32.442150 kubelet[3360]: E0117 00:24:32.439348 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:24:33.476483 systemd[1]: Started sshd@23-172.31.29.247:22-4.153.228.146:49238.service - OpenSSH per-connection server daemon (4.153.228.146:49238). Jan 17 00:24:34.006080 sshd[6580]: Accepted publickey for core from 4.153.228.146 port 49238 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:34.010885 sshd[6580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:34.030160 systemd-logind[2080]: New session 24 of user core. Jan 17 00:24:34.034110 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:24:34.694184 sshd[6580]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:34.700547 systemd[1]: sshd@23-172.31.29.247:22-4.153.228.146:49238.service: Deactivated successfully. Jan 17 00:24:34.703873 systemd-logind[2080]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:24:34.703969 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:24:34.707283 systemd-logind[2080]: Removed session 24. 
Jan 17 00:24:36.438081 kubelet[3360]: E0117 00:24:36.437986 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:24:36.439150 kubelet[3360]: E0117 00:24:36.439096 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:24:39.434475 kubelet[3360]: E0117 00:24:39.434409 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:24:39.786366 systemd[1]: Started sshd@24-172.31.29.247:22-4.153.228.146:52064.service - OpenSSH per-connection server daemon (4.153.228.146:52064). Jan 17 00:24:40.316729 sshd[6602]: Accepted publickey for core from 4.153.228.146 port 52064 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:24:40.319197 sshd[6602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:24:40.339136 systemd-logind[2080]: New session 25 of user core. Jan 17 00:24:40.344449 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 17 00:24:40.438358 containerd[2105]: time="2026-01-17T00:24:40.436538525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:24:40.705915 containerd[2105]: time="2026-01-17T00:24:40.705850828Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:40.710501 containerd[2105]: time="2026-01-17T00:24:40.710229947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:24:40.710501 containerd[2105]: time="2026-01-17T00:24:40.710345463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:24:40.712968 kubelet[3360]: E0117 00:24:40.711069 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:24:40.712968 kubelet[3360]: E0117 00:24:40.711152 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:24:40.715084 kubelet[3360]: E0117 00:24:40.714924 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxc9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-h9bhg_calico-apiserver(19297f6f-5ccc-4eab-996b-36acef548d9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:40.716234 kubelet[3360]: E0117 00:24:40.716183 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:24:40.987241 sshd[6602]: pam_unix(sshd:session): session closed for user core Jan 17 00:24:40.996766 systemd-logind[2080]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:24:40.997710 systemd[1]: sshd@24-172.31.29.247:22-4.153.228.146:52064.service: Deactivated successfully. Jan 17 00:24:41.009650 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:24:41.016660 systemd-logind[2080]: Removed session 25. 
Jan 17 00:24:43.435078 containerd[2105]: time="2026-01-17T00:24:43.434394054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:24:43.746534 containerd[2105]: time="2026-01-17T00:24:43.746389676Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:43.748663 containerd[2105]: time="2026-01-17T00:24:43.748468081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:24:43.748810 containerd[2105]: time="2026-01-17T00:24:43.748719784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:24:43.748991 kubelet[3360]: E0117 00:24:43.748933 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:24:43.751366 kubelet[3360]: E0117 00:24:43.749002 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:24:43.751366 kubelet[3360]: E0117 00:24:43.749610 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27bk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-22ww4_calico-system(804c4956-a77e-4057-9db7-9d50191156a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:43.751366 kubelet[3360]: E0117 00:24:43.750856 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:24:47.168080 systemd-journald[1577]: Under memory pressure, flushing caches. Jan 17 00:24:47.166195 systemd-resolved[1998]: Under memory pressure, flushing caches. Jan 17 00:24:47.166243 systemd-resolved[1998]: Flushed all caches. 
Jan 17 00:24:47.433494 containerd[2105]: time="2026-01-17T00:24:47.433030835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:24:47.823329 containerd[2105]: time="2026-01-17T00:24:47.823118521Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:47.825401 containerd[2105]: time="2026-01-17T00:24:47.825272593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:24:47.825401 containerd[2105]: time="2026-01-17T00:24:47.825345855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:24:47.825606 kubelet[3360]: E0117 00:24:47.825560 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:24:47.826118 kubelet[3360]: E0117 00:24:47.825618 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:24:47.826118 kubelet[3360]: E0117 00:24:47.825915 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8c5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-54bbb49cd4-pb4fm_calico-system(2c85088d-5853-486f-a2a6-a1b33d923ebd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:47.826422 containerd[2105]: time="2026-01-17T00:24:47.826250094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:24:47.828003 kubelet[3360]: E0117 00:24:47.827946 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:24:48.136185 containerd[2105]: time="2026-01-17T00:24:48.136025994Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:48.138539 containerd[2105]: time="2026-01-17T00:24:48.138181021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:24:48.138539 containerd[2105]: time="2026-01-17T00:24:48.138257116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:24:48.138753 kubelet[3360]: E0117 00:24:48.138637 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:24:48.138753 kubelet[3360]: E0117 00:24:48.138691 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:24:48.138911 kubelet[3360]: E0117 00:24:48.138846 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d6ba05377925445eb2d7612d02a08bcf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:48.141244 containerd[2105]: time="2026-01-17T00:24:48.141209798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:24:48.535419 containerd[2105]: time="2026-01-17T00:24:48.535329207Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:48.537474 containerd[2105]: time="2026-01-17T00:24:48.537410874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:24:48.537672 containerd[2105]: time="2026-01-17T00:24:48.537457575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:24:48.537728 kubelet[3360]: E0117 00:24:48.537667 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 
00:24:48.537798 kubelet[3360]: E0117 00:24:48.537721 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:24:48.537916 kubelet[3360]: E0117 00:24:48.537862 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzjv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64d946f8bb-fs6r2_calico-system(6e903c9f-05d5-45fd-9d78-2d7516aa0977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:48.539148 kubelet[3360]: E0117 00:24:48.539104 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:24:50.434029 containerd[2105]: time="2026-01-17T00:24:50.433888670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:24:50.732866 containerd[2105]: time="2026-01-17T00:24:50.732734993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:50.736093 containerd[2105]: time="2026-01-17T00:24:50.735973877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:24:50.736234 containerd[2105]: time="2026-01-17T00:24:50.735991419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:24:50.736327 kubelet[3360]: E0117 00:24:50.736277 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:24:50.736692 kubelet[3360]: E0117 00:24:50.736326 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:24:50.736692 kubelet[3360]: E0117 00:24:50.736467 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gh4hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-d8d9c5b87-7zrtb_calico-apiserver(2207401f-e738-47bd-8283-8eef3cbcb7c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:50.737707 kubelet[3360]: E0117 00:24:50.737660 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:24:51.434105 containerd[2105]: time="2026-01-17T00:24:51.434034611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:24:51.701186 containerd[2105]: time="2026-01-17T00:24:51.701020371Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:51.703609 containerd[2105]: time="2026-01-17T00:24:51.703516534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:24:51.704119 containerd[2105]: time="2026-01-17T00:24:51.703541566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:24:51.704211 kubelet[3360]: E0117 00:24:51.704075 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:24:51.704211 kubelet[3360]: E0117 00:24:51.704131 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:24:51.704372 kubelet[3360]: E0117 00:24:51.704273 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:51.712266 containerd[2105]: time="2026-01-17T00:24:51.712211297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:24:52.009384 containerd[2105]: time="2026-01-17T00:24:52.009220751Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:24:52.011384 containerd[2105]: time="2026-01-17T00:24:52.011322443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:24:52.011512 containerd[2105]: time="2026-01-17T00:24:52.011363844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:24:52.011656 kubelet[3360]: E0117 00:24:52.011608 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:24:52.012161 kubelet[3360]: E0117 00:24:52.011665 3360 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:24:52.012161 kubelet[3360]: E0117 00:24:52.011917 3360 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlttd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hbb8z_calico-system(d7198563-8b4e-4b52-ad88-2f9e6d09e79c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:24:52.013150 kubelet[3360]: E0117 00:24:52.013105 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c" Jan 17 00:24:53.433880 kubelet[3360]: E0117 00:24:53.433811 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:24:55.876320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c7b38b476cdf968c70a697dab4213e2c17b6ae2e8ea4edeaa206b9c5d5d68e6-rootfs.mount: Deactivated successfully. Jan 17 00:24:55.915545 containerd[2105]: time="2026-01-17T00:24:55.881603977Z" level=info msg="shim disconnected" id=9c7b38b476cdf968c70a697dab4213e2c17b6ae2e8ea4edeaa206b9c5d5d68e6 namespace=k8s.io Jan 17 00:24:55.929940 containerd[2105]: time="2026-01-17T00:24:55.929878106Z" level=warning msg="cleaning up after shim disconnected" id=9c7b38b476cdf968c70a697dab4213e2c17b6ae2e8ea4edeaa206b9c5d5d68e6 namespace=k8s.io Jan 17 00:24:55.929940 containerd[2105]: time="2026-01-17T00:24:55.929927554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:56.246967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17e15b402ec85579f1b9f4a4f7dbfa93d519e94f6e07d8ff19a951f3b026aab3-rootfs.mount: Deactivated successfully. 
Jan 17 00:24:56.257235 containerd[2105]: time="2026-01-17T00:24:56.257166596Z" level=info msg="shim disconnected" id=17e15b402ec85579f1b9f4a4f7dbfa93d519e94f6e07d8ff19a951f3b026aab3 namespace=k8s.io Jan 17 00:24:56.257235 containerd[2105]: time="2026-01-17T00:24:56.257219896Z" level=warning msg="cleaning up after shim disconnected" id=17e15b402ec85579f1b9f4a4f7dbfa93d519e94f6e07d8ff19a951f3b026aab3 namespace=k8s.io Jan 17 00:24:56.257235 containerd[2105]: time="2026-01-17T00:24:56.257228639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:24:56.275098 containerd[2105]: time="2026-01-17T00:24:56.274394675Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:24:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:24:56.677948 kubelet[3360]: I0117 00:24:56.677907 3360 scope.go:117] "RemoveContainer" containerID="17e15b402ec85579f1b9f4a4f7dbfa93d519e94f6e07d8ff19a951f3b026aab3" Jan 17 00:24:56.678864 kubelet[3360]: I0117 00:24:56.678807 3360 scope.go:117] "RemoveContainer" containerID="9c7b38b476cdf968c70a697dab4213e2c17b6ae2e8ea4edeaa206b9c5d5d68e6" Jan 17 00:24:56.697817 containerd[2105]: time="2026-01-17T00:24:56.697782784Z" level=info msg="CreateContainer within sandbox \"6767d4553e94249905bcce7bbda06806b7b68fd87650dd49495b062bcc969341\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 00:24:56.698373 containerd[2105]: time="2026-01-17T00:24:56.697854398Z" level=info msg="CreateContainer within sandbox \"dc330af4c8c38670d2520a4dafa3682938f315eeee7d5a9de5aa21935f4cab54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 00:24:56.730478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263043313.mount: Deactivated successfully. Jan 17 00:24:56.740185 containerd[2105]: time="2026-01-17T00:24:56.740126606Z" level=info msg="CreateContainer within sandbox \"6767d4553e94249905bcce7bbda06806b7b68fd87650dd49495b062bcc969341\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"92d29fe0274ad320a7292406ff03a4222698a8555d88381138320e7bd8f9e0c0\"" Jan 17 00:24:56.742073 containerd[2105]: time="2026-01-17T00:24:56.740998183Z" level=info msg="StartContainer for \"92d29fe0274ad320a7292406ff03a4222698a8555d88381138320e7bd8f9e0c0\"" Jan 17 00:24:56.817626 containerd[2105]: time="2026-01-17T00:24:56.817565503Z" level=info msg="StartContainer for \"92d29fe0274ad320a7292406ff03a4222698a8555d88381138320e7bd8f9e0c0\" returns successfully" Jan 17 00:24:56.987777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028035055.mount: Deactivated successfully. 
Jan 17 00:24:57.004088 containerd[2105]: time="2026-01-17T00:24:57.002964636Z" level=info msg="CreateContainer within sandbox \"dc330af4c8c38670d2520a4dafa3682938f315eeee7d5a9de5aa21935f4cab54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5acef481eeb4e1365bd515ca7a8f37b5075a8bbef6c0e34579d8238ea87ec630\"" Jan 17 00:24:57.004088 containerd[2105]: time="2026-01-17T00:24:57.003705055Z" level=info msg="StartContainer for \"5acef481eeb4e1365bd515ca7a8f37b5075a8bbef6c0e34579d8238ea87ec630\"" Jan 17 00:24:57.107442 containerd[2105]: time="2026-01-17T00:24:57.107388710Z" level=info msg="StartContainer for \"5acef481eeb4e1365bd515ca7a8f37b5075a8bbef6c0e34579d8238ea87ec630\" returns successfully" Jan 17 00:24:58.432811 kubelet[3360]: E0117 00:24:58.432752 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-22ww4" podUID="804c4956-a77e-4057-9db7-9d50191156a3" Jan 17 00:24:59.433475 kubelet[3360]: E0117 00:24:59.433431 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54bbb49cd4-pb4fm" podUID="2c85088d-5853-486f-a2a6-a1b33d923ebd" Jan 17 00:25:00.948994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68927b667f5f5b36f641178435eee463d1973824fb088b35f3ae246c2ccab9bd-rootfs.mount: Deactivated successfully. 
Jan 17 00:25:00.975855 containerd[2105]: time="2026-01-17T00:25:00.975453586Z" level=info msg="shim disconnected" id=68927b667f5f5b36f641178435eee463d1973824fb088b35f3ae246c2ccab9bd namespace=k8s.io Jan 17 00:25:00.975855 containerd[2105]: time="2026-01-17T00:25:00.975681738Z" level=warning msg="cleaning up after shim disconnected" id=68927b667f5f5b36f641178435eee463d1973824fb088b35f3ae246c2ccab9bd namespace=k8s.io Jan 17 00:25:00.975855 containerd[2105]: time="2026-01-17T00:25:00.975692523Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:25:00.993382 containerd[2105]: time="2026-01-17T00:25:00.993309776Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:25:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:25:01.732609 kubelet[3360]: I0117 00:25:01.732212 3360 scope.go:117] "RemoveContainer" containerID="68927b667f5f5b36f641178435eee463d1973824fb088b35f3ae246c2ccab9bd" Jan 17 00:25:01.766833 containerd[2105]: time="2026-01-17T00:25:01.766723203Z" level=info msg="CreateContainer within sandbox \"da560e845827b24ab20ad4fae82e1427080e3d506e65a048f05dbf18966244b1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 00:25:01.878759 containerd[2105]: time="2026-01-17T00:25:01.878700383Z" level=info msg="CreateContainer within sandbox \"da560e845827b24ab20ad4fae82e1427080e3d506e65a048f05dbf18966244b1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a2f33e786b195c39640f743f67fbd411db6b39030a504599fe82202dda35f00a\"" Jan 17 00:25:01.879386 containerd[2105]: time="2026-01-17T00:25:01.879301468Z" level=info msg="StartContainer for \"a2f33e786b195c39640f743f67fbd411db6b39030a504599fe82202dda35f00a\"" Jan 17 00:25:02.540509 containerd[2105]: time="2026-01-17T00:25:02.540428553Z" level=info msg="StartContainer for \"a2f33e786b195c39640f743f67fbd411db6b39030a504599fe82202dda35f00a\" returns successfully" Jan 17 00:25:03.438141 kubelet[3360]: E0117 00:25:03.437983 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64d946f8bb-fs6r2" podUID="6e903c9f-05d5-45fd-9d78-2d7516aa0977" Jan 17 00:25:04.349313 kubelet[3360]: E0117 00:25:04.349009 3360 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-247?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 00:25:04.435814 kubelet[3360]: E0117 00:25:04.435332 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-h9bhg" podUID="19297f6f-5ccc-4eab-996b-36acef548d9c" Jan 17 00:25:06.434010 kubelet[3360]: E0117 00:25:06.433512 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d8d9c5b87-7zrtb" podUID="2207401f-e738-47bd-8283-8eef3cbcb7c1" Jan 17 00:25:06.434612 kubelet[3360]: E0117 00:25:06.434450 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hbb8z" podUID="d7198563-8b4e-4b52-ad88-2f9e6d09e79c"