Jan 17 00:26:42.933132 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:26:42.933175 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:26:42.933196 kernel: BIOS-provided physical RAM map:
Jan 17 00:26:42.933208 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:26:42.933219 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 17 00:26:42.933231 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 17 00:26:42.933245 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 17 00:26:42.933257 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 17 00:26:42.933269 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 17 00:26:42.933284 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 17 00:26:42.933296 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 17 00:26:42.933309 kernel: NX (Execute Disable) protection: active
Jan 17 00:26:42.933320 kernel: APIC: Static calls initialized
Jan 17 00:26:42.933333 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:26:42.933348 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 17 00:26:42.933365 kernel: SMBIOS 2.7 present.
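The BIOS-e820 map above is the firmware's view of the instance's 2 GiB of RAM. As a quick cross-check (an annotation, not log output), a minimal Python sketch summing the three "usable" ranges, whose hex bounds are inclusive:

    # Sum the "usable" ranges from the BIOS-e820 map above (inclusive bounds).
    usable = [
        (0x0000000000000000, 0x000000000009ffff),
        (0x0000000000100000, 0x00000000786cdfff),
        (0x00000000789de000, 0x000000007c97bfff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(total // 1024, "KiB")  # 2037808 KiB; the kernel's later "2037804K" total
                                 # is this figure minus the 4 KiB page it reserves at 0x0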
Jan 17 00:26:42.933379 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 17 00:26:42.933392 kernel: Hypervisor detected: KVM
Jan 17 00:26:42.933405 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:26:42.933418 kernel: kvm-clock: using sched offset of 3748581990 cycles
Jan 17 00:26:42.933433 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:26:42.933446 kernel: tsc: Detected 2500.004 MHz processor
Jan 17 00:26:42.933460 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:26:42.933474 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:26:42.933488 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 17 00:26:42.933505 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:26:42.933518 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:26:42.933532 kernel: Using GB pages for direct mapping
Jan 17 00:26:42.933545 kernel: Secure boot disabled
Jan 17 00:26:42.933558 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:26:42.933572 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 17 00:26:42.933585 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:26:42.933597 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:26:42.933609 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:26:42.933624 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 17 00:26:42.933636 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 17 00:26:42.933648 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:26:42.933661 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:26:42.933673 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 17 00:26:42.933686 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 17 00:26:42.933705 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:26:42.933722 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:26:42.933735 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 17 00:26:42.933749 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 17 00:26:42.933762 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 17 00:26:42.933775 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 17 00:26:42.933789 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 17 00:26:42.933807 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 17 00:26:42.933820 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 17 00:26:42.933834 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 17 00:26:42.933847 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 17 00:26:42.933860 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 17 00:26:42.933873 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 17 00:26:42.933886 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 17 00:26:42.933900 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:26:42.933912 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:26:42.933926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 17 00:26:42.933942 kernel: NUMA: Initialized distance table, cnt=1
Jan 17 00:26:42.933955 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 17 00:26:42.933969 kernel: Zone ranges:
Jan 17 00:26:42.933982 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:26:42.933995 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 17 00:26:42.934008 kernel: Normal empty
Jan 17 00:26:42.934021 kernel: Movable zone start for each node
Jan 17 00:26:42.934034 kernel: Early memory node ranges
Jan 17 00:26:42.934059 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:26:42.934076 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 17 00:26:42.934089 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 17 00:26:42.934103 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 17 00:26:42.934116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:26:42.934129 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:26:42.934143 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:26:42.934156 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 17 00:26:42.934169 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:26:42.934183 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:26:42.934200 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 17 00:26:42.934213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:26:42.934226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:26:42.934240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:26:42.934254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:26:42.934267 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:26:42.934281 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:26:42.934294 kernel: TSC deadline timer available
Jan 17 00:26:42.934308 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:26:42.934325 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:26:42.934338 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 17 00:26:42.934351 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:26:42.934365 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:26:42.934378 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:26:42.934392 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:26:42.934405 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:26:42.934418 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:26:42.934431 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:26:42.934445 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:26:42.934463 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:26:42.934477 kernel: random: crng init done
Jan 17 00:26:42.934490 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:26:42.934503 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:26:42.934516 kernel: Fallback order for Node 0: 0
Jan 17 00:26:42.934529 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 17 00:26:42.934543 kernel: Policy zone: DMA32
Jan 17 00:26:42.934557 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:26:42.934574 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved)
Jan 17 00:26:42.934587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:26:42.934600 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:26:42.934614 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:26:42.934627 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:26:42.934640 kernel: Dynamic Preempt: voluntary
Jan 17 00:26:42.934654 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:26:42.934668 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:26:42.934685 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:26:42.934698 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:26:42.934712 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:26:42.934725 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:26:42.936279 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:26:42.936307 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:26:42.936323 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:26:42.936338 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:26:42.936371 kernel: Console: colour dummy device 80x25
Jan 17 00:26:42.936386 kernel: printk: console [tty0] enabled
Jan 17 00:26:42.936402 kernel: printk: console [ttyS0] enabled
Jan 17 00:26:42.936417 kernel: ACPI: Core revision 20230628
Jan 17 00:26:42.936433 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 17 00:26:42.936451 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:26:42.936467 kernel: x2apic enabled
Jan 17 00:26:42.936482 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:26:42.936498 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jan 17 00:26:42.936516 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Jan 17 00:26:42.936532 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:26:42.936547 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:26:42.936563 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:26:42.936578 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:26:42.936593 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:26:42.936609 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:26:42.936624 kernel: RETBleed: Vulnerable
Jan 17 00:26:42.936640 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:26:42.936655 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:26:42.936668 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:26:42.936683 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 00:26:42.936695 kernel: active return thunk: its_return_thunk
Jan 17 00:26:42.936708 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:26:42.936721 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:26:42.936735 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:26:42.936748 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:26:42.936761 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 00:26:42.936774 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 00:26:42.936908 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:26:42.936922 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:26:42.936937 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:26:42.936957 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:26:42.936973 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:26:42.936988 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 17 00:26:42.937004 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 17 00:26:42.937020 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 17 00:26:42.937035 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 17 00:26:42.937067 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 17 00:26:42.937082 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 17 00:26:42.937099 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 17 00:26:42.937116 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:26:42.937133 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:26:42.937153 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:26:42.937170 kernel: landlock: Up and running.
Jan 17 00:26:42.937187 kernel: SELinux: Initializing.
Jan 17 00:26:42.937203 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:26:42.937220 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:26:42.937236 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 00:26:42.937251 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:26:42.937266 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:26:42.937282 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:26:42.937300 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:26:42.937322 kernel: signal: max sigframe size: 3632
Jan 17 00:26:42.937339 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:26:42.937356 kernel: rcu: Max phase no-delay instances is 400.
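The mitigation lines above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data, GDS) are also exported through sysfs, so the same status can be read back once the system is up. A minimal sketch, assuming the standard /sys layout:

    # Print the kernel's per-vulnerability mitigation status, one line each,
    # mirroring the boot messages above.
    from pathlib import Path

    for entry in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")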
Jan 17 00:26:42.937372 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:26:42.937389 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:26:42.937406 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:26:42.937423 kernel: .... node #0, CPUs: #1
Jan 17 00:26:42.937442 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:26:42.937460 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:26:42.937477 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:26:42.937494 kernel: smpboot: Max logical packages: 1
Jan 17 00:26:42.937511 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Jan 17 00:26:42.937528 kernel: devtmpfs: initialized
Jan 17 00:26:42.937545 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:26:42.937562 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 17 00:26:42.937578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:26:42.937595 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:26:42.937612 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:26:42.937633 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:26:42.937650 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:26:42.937668 kernel: audit: type=2000 audit(1768609603.948:1): state=initialized audit_enabled=0 res=1
Jan 17 00:26:42.937684 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:26:42.937702 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:26:42.937719 kernel: cpuidle: using governor menu
Jan 17 00:26:42.937736 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:26:42.937753 kernel: dca service started, version 1.12.1
Jan 17 00:26:42.937769 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:26:42.937790 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:26:42.937807 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:26:42.937823 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:26:42.937840 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:26:42.937857 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:26:42.937873 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:26:42.937887 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:26:42.937901 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:26:42.937916 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:26:42.937934 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:26:42.937949 kernel: ACPI: Interpreter enabled
Jan 17 00:26:42.937961 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:26:42.937980 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:26:42.937999 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:26:42.938020 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:26:42.938040 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:26:42.939998 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:26:42.940248 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:26:42.940401 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:26:42.940536 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:26:42.940555 kernel: acpiphp: Slot [3] registered
Jan 17 00:26:42.940572 kernel: acpiphp: Slot [4] registered
Jan 17 00:26:42.940587 kernel: acpiphp: Slot [5] registered
Jan 17 00:26:42.940602 kernel: acpiphp: Slot [6] registered
Jan 17 00:26:42.940618 kernel: acpiphp: Slot [7] registered
Jan 17 00:26:42.940637 kernel: acpiphp: Slot [8] registered
Jan 17 00:26:42.940653 kernel: acpiphp: Slot [9] registered
Jan 17 00:26:42.940669 kernel: acpiphp: Slot [10] registered
Jan 17 00:26:42.940684 kernel: acpiphp: Slot [11] registered
Jan 17 00:26:42.940700 kernel: acpiphp: Slot [12] registered
Jan 17 00:26:42.940715 kernel: acpiphp: Slot [13] registered
Jan 17 00:26:42.940731 kernel: acpiphp: Slot [14] registered
Jan 17 00:26:42.940746 kernel: acpiphp: Slot [15] registered
Jan 17 00:26:42.940761 kernel: acpiphp: Slot [16] registered
Jan 17 00:26:42.940786 kernel: acpiphp: Slot [17] registered
Jan 17 00:26:42.940803 kernel: acpiphp: Slot [18] registered
Jan 17 00:26:42.940817 kernel: acpiphp: Slot [19] registered
Jan 17 00:26:42.940830 kernel: acpiphp: Slot [20] registered
Jan 17 00:26:42.940844 kernel: acpiphp: Slot [21] registered
Jan 17 00:26:42.940857 kernel: acpiphp: Slot [22] registered
Jan 17 00:26:42.940871 kernel: acpiphp: Slot [23] registered
Jan 17 00:26:42.940886 kernel: acpiphp: Slot [24] registered
Jan 17 00:26:42.940901 kernel: acpiphp: Slot [25] registered
Jan 17 00:26:42.940917 kernel: acpiphp: Slot [26] registered
Jan 17 00:26:42.940936 kernel: acpiphp: Slot [27] registered
Jan 17 00:26:42.940951 kernel: acpiphp: Slot [28] registered
Jan 17 00:26:42.940966 kernel: acpiphp: Slot [29] registered
Jan 17 00:26:42.940982 kernel: acpiphp: Slot [30] registered
Jan 17 00:26:42.940997 kernel: acpiphp: Slot [31] registered
Jan 17 00:26:42.941012 kernel: PCI host bridge to bus 0000:00
Jan 17 00:26:42.941175 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:26:42.941301 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:26:42.941429 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:26:42.941551 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:26:42.941672 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:26:42.941793 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:26:42.941952 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:26:42.942126 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:26:42.942273 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 17 00:26:42.942416 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:26:42.942552 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 17 00:26:42.942687 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 17 00:26:42.942820 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 17 00:26:42.942952 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 17 00:26:42.944606 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 17 00:26:42.944766 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 17 00:26:42.944929 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 17 00:26:42.945158 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 17 00:26:42.945289 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:26:42.945414 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 17 00:26:42.945539 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:26:42.945671 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:26:42.945802 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 17 00:26:42.945935 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:26:42.946095 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 17 00:26:42.946125 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:26:42.946155 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:26:42.946176 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:26:42.946193 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:26:42.946209 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:26:42.946231 kernel: iommu: Default domain type: Translated
Jan 17 00:26:42.946247 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:26:42.946263 kernel: efivars: Registered efivars operations
Jan 17 00:26:42.946280 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:26:42.946296 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:26:42.946313 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 17 00:26:42.946328 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 17 00:26:42.946468 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 17 00:26:42.946606 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 17 00:26:42.946739 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:26:42.946759 kernel: vgaarb: loaded
Jan 17 00:26:42.946776 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 17 00:26:42.946792 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 17 00:26:42.946809 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:26:42.946825 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:26:42.946841 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:26:42.946857 kernel: pnp: PnP ACPI init
Jan 17 00:26:42.946876 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:26:42.946893 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:26:42.946909 kernel: NET: Registered PF_INET protocol family
Jan 17 00:26:42.946925 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:26:42.946942 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:26:42.946958 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:26:42.946974 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:26:42.946991 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:26:42.947007 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:26:42.947027 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:26:42.947043 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:26:42.948103 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:26:42.948121 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:26:42.948295 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:26:42.948441 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:26:42.948568 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:26:42.948689 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:26:42.948821 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:26:42.948972 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:26:42.948992 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:26:42.949007 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:26:42.949022 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jan 17 00:26:42.949036 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:26:42.949109 kernel: Initialise system trusted keyrings
Jan 17 00:26:42.949125 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:26:42.949139 kernel: Key type asymmetric registered
Jan 17 00:26:42.949158 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:26:42.949173 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:26:42.949187 kernel: io scheduler mq-deadline registered
Jan 17 00:26:42.949202 kernel: io scheduler kyber registered
Jan 17 00:26:42.949217 kernel: io scheduler bfq registered
Jan 17 00:26:42.949231 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:26:42.949245 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:26:42.949259 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:26:42.949274 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:26:42.949292 kernel: i8042: Warning: Keylock active
Jan 17 00:26:42.949306 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:26:42.949320 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:26:42.949468 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:26:42.949593 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:26:42.949715 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:26:42 UTC (1768609602)
Jan 17 00:26:42.949837 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:26:42.949855 kernel: intel_pstate: CPU model not supported
Jan 17 00:26:42.949874 kernel: efifb: probing for efifb
Jan 17 00:26:42.949888 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 17 00:26:42.949904 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 17 00:26:42.949919 kernel: efifb: scrolling: redraw
Jan 17 00:26:42.949934 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:26:42.949948 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:26:42.949963 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:26:42.949978 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:26:42.949992 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:26:42.950009 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:26:42.950023 kernel: Segment Routing with IPv6
Jan 17 00:26:42.950037 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:26:42.951095 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:26:42.951118 kernel: Key type dns_resolver registered
Jan 17 00:26:42.951136 kernel: IPI shorthand broadcast: enabled
Jan 17 00:26:42.951182 kernel: sched_clock: Marking stable (457001632, 128863976)->(679594536, -93728928)
Jan 17 00:26:42.951203 kernel: registered taskstats version 1
Jan 17 00:26:42.951220 kernel: Loading compiled-in X.509 certificates
Jan 17 00:26:42.951240 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:26:42.951257 kernel: Key type .fscrypt registered
Jan 17 00:26:42.951274 kernel: Key type fscrypt-provisioning registered
Jan 17 00:26:42.951294 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:26:42.951311 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:26:42.951328 kernel: ima: No architecture policies found
Jan 17 00:26:42.951345 kernel: clk: Disabling unused clocks
Jan 17 00:26:42.951363 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:26:42.951380 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:26:42.951400 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:26:42.951417 kernel: Run /init as init process
Jan 17 00:26:42.951434 kernel: with arguments:
Jan 17 00:26:42.951451 kernel: /init
Jan 17 00:26:42.951468 kernel: with environment:
Jan 17 00:26:42.951485 kernel: HOME=/
Jan 17 00:26:42.951501 kernel: TERM=linux
Jan 17 00:26:42.951522 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:26:42.951545 systemd[1]: Detected virtualization amazon.
Jan 17 00:26:42.951564 systemd[1]: Detected architecture x86-64.
Jan 17 00:26:42.951581 systemd[1]: Running in initrd.
Jan 17 00:26:42.951598 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:26:42.951616 systemd[1]: Hostname set to .
Jan 17 00:26:42.951632 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:26:42.951648 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:26:42.951684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:26:42.951724 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:26:42.951741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:26:42.951758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:26:42.951775 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:26:42.951797 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:26:42.951817 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:26:42.951831 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:26:42.951849 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:26:42.951866 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:26:42.951883 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:26:42.951898 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:26:42.951914 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:26:42.951935 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:26:42.951949 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:26:42.951974 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:26:42.951988 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:26:42.952003 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:26:42.952018 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:26:42.952033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:26:42.952107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:26:42.952124 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:26:42.952144 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:26:42.952162 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:26:42.952179 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:26:42.952193 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:26:42.952209 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:26:42.952228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:26:42.952246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:42.952265 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:26:42.952328 systemd-journald[179]: Collecting audit messages is disabled.
Jan 17 00:26:42.952369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:26:42.952388 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:26:42.952413 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:26:42.952432 systemd-journald[179]: Journal started
Jan 17 00:26:42.952470 systemd-journald[179]: Runtime Journal (/run/log/journal/ec21075a1ca9af61f6f4c184679bbaf0) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:26:42.952005 systemd-modules-load[180]: Inserted module 'overlay'
Jan 17 00:26:42.967078 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:26:42.967650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:42.981334 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:42.985952 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:26:42.989142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:26:42.999041 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:26:43.006179 kernel: Bridge firewalling registered
Jan 17 00:26:43.006231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 17 00:26:43.005189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:26:43.006140 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 17 00:26:43.010908 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:26:43.022224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:26:43.026114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:26:43.032915 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:43.037413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:26:43.047847 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:26:43.049507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:26:43.054314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:26:43.068749 dracut-cmdline[213]: dracut-dracut-053
Jan 17 00:26:43.073560 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:26:43.109631 systemd-resolved[216]: Positive Trust Anchors:
Jan 17 00:26:43.109654 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:26:43.109713 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:26:43.118153 systemd-resolved[216]: Defaulting to hostname 'linux'.
Jan 17 00:26:43.119659 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:26:43.122406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:26:43.171088 kernel: SCSI subsystem initialized
Jan 17 00:26:43.181087 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:26:43.192086 kernel: iscsi: registered transport (tcp)
Jan 17 00:26:43.214495 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:26:43.214585 kernel: QLogic iSCSI HBA Driver
Jan 17 00:26:43.254017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:26:43.261279 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:26:43.288097 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:26:43.288174 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:26:43.289364 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:26:43.332086 kernel: raid6: avx512x4 gen() 15440 MB/s
Jan 17 00:26:43.350076 kernel: raid6: avx512x2 gen() 15361 MB/s
Jan 17 00:26:43.368084 kernel: raid6: avx512x1 gen() 15411 MB/s
Jan 17 00:26:43.386077 kernel: raid6: avx2x4 gen() 15350 MB/s
Jan 17 00:26:43.404092 kernel: raid6: avx2x2 gen() 15339 MB/s
Jan 17 00:26:43.422733 kernel: raid6: avx2x1 gen() 11573 MB/s
Jan 17 00:26:43.422790 kernel: raid6: using algorithm avx512x4 gen() 15440 MB/s
Jan 17 00:26:43.441413 kernel: raid6: .... xor() 7643 MB/s, rmw enabled
Jan 17 00:26:43.441469 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:26:43.463082 kernel: xor: automatically using best checksumming function avx
Jan 17 00:26:43.626082 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:26:43.636371 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:26:43.641253 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:26:43.662824 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 17 00:26:43.668219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:26:43.676333 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:26:43.695985 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Jan 17 00:26:43.727523 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:26:43.733265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:26:43.785206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:26:43.795377 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:26:43.821275 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:26:43.823289 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:26:43.825155 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:26:43.826153 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:26:43.832600 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:26:43.859245 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:26:43.893117 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:26:43.900315 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:26:43.900599 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:26:43.913083 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 00:26:43.922645 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:26:43.922717 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:26:43.928745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:26:43.929913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:43.937887 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:52:4d:7d:94:7d
Jan 17 00:26:43.934308 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:43.934903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:43.935247 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:43.935942 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:43.947534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:43.951330 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:26:43.954355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:43.954562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:43.962456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:43.971231 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:26:43.971467 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:26:43.986116 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:26:43.991420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:44.000704 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:26:44.000751 kernel: GPT:9289727 != 33554431
Jan 17 00:26:44.000825 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:26:44.000846 kernel: GPT:9289727 != 33554431
Jan 17 00:26:44.000866 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:26:44.000887 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:26:44.009344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:44.023159 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
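The GPT complaints above (9289727 != 33554431) are expected on first boot: the backup GPT header still sits where the roughly 4.4 GiB disk image ended, while the EBS volume is 16 GiB (33554432 sectors). Flatcar's first-boot tooling relocates it itself, which appears to be what the disk-uuid.service messages below reflect. As a hedged sketch of the generic manual fix (device path taken from the log; not needed on Flatcar):

    # Illustrative only: `sgdisk -e` moves the backup GPT structures
    # to the true end of the (grown) disk.
    import subprocess
    subprocess.run(["sgdisk", "-e", "/dev/nvme0n1"], check=True)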
Jan 17 00:26:44.063756 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Jan 17 00:26:44.078122 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Jan 17 00:26:44.097167 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:26:44.123026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:26:44.145733 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:26:44.151790 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:26:44.152428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:26:44.161297 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:26:44.169036 disk-uuid[630]: Primary Header is updated.
Jan 17 00:26:44.169036 disk-uuid[630]: Secondary Entries is updated.
Jan 17 00:26:44.169036 disk-uuid[630]: Secondary Header is updated.
Jan 17 00:26:44.175078 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:26:44.183094 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:26:44.190037 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:26:45.190663 disk-uuid[631]: The operation has completed successfully.
Jan 17 00:26:45.192117 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:26:45.305069 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:26:45.305163 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:26:45.328242 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:26:45.331531 sh[972]: Success
Jan 17 00:26:45.345072 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:26:45.457314 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:26:45.475742 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:26:45.476978 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:26:45.515064 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:26:45.515138 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:45.515154 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:26:45.517121 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:26:45.519500 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:26:45.629074 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:26:45.643390 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:26:45.644471 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:26:45.656341 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:26:45.658255 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
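verity-setup.service above maps the read-only USR-A partition to /dev/mapper/usr, checking every block against the Merkle tree whose root is the verity.usrhash= value from the kernel command line. A rough equivalent with plain veritysetup, as a sketch rather than Flatcar's exact invocation (Flatcar keeps the hash tree on the USR partition itself, so a real call would also pass the hash offset):

    # Sketch of opening a dm-verity device like /dev/mapper/usr.
    import subprocess

    ROOT_HASH = "5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd"  # verity.usrhash=
    subprocess.run([
        "veritysetup", "open",
        "/dev/nvme0n1p3",  # data device: USR-A, per the device lines above
        "usr",             # dm name: creates /dev/mapper/usr
        "/dev/nvme0n1p3",  # hash device: same partition (offset flag omitted here)
        ROOT_HASH,
    ], check=True)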
Jan 17 00:26:45.686722 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:45.686804 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:45.686827 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:26:45.694079 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:26:45.708783 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:26:45.711072 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:45.719588 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:26:45.726262 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:26:45.765514 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:26:45.770287 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:26:45.800585 systemd-networkd[1164]: lo: Link UP
Jan 17 00:26:45.800598 systemd-networkd[1164]: lo: Gained carrier
Jan 17 00:26:45.802560 systemd-networkd[1164]: Enumeration completed
Jan 17 00:26:45.809255 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:26:45.809263 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:26:45.810893 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:26:45.813007 systemd[1]: Reached target network.target - Network.
Jan 17 00:26:45.815537 systemd-networkd[1164]: eth0: Link UP
Jan 17 00:26:45.815543 systemd-networkd[1164]: eth0: Gained carrier
Jan 17 00:26:45.815559 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:26:45.827169 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.25.116/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:26:45.991022 ignition[1109]: Ignition 2.19.0
Jan 17 00:26:45.991033 ignition[1109]: Stage: fetch-offline
Jan 17 00:26:45.991270 ignition[1109]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:45.991279 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:26:45.994867 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:26:45.991577 ignition[1109]: Ignition finished successfully
Jan 17 00:26:45.999246 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
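eth0 is picked up by Flatcar's catch-all fallback unit, so the NIC comes up with DHCP and no machine-specific configuration; the DHCPv4 lease (172.31.25.116/20 via 172.31.16.1) appears a few lines above. The shipped zz-default.network amounts to roughly the following (paraphrased for illustration, not the verbatim file):

    [Match]
    Name=*

    [Network]
    DHCP=yes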
Jan 17 00:26:46.022777 ignition[1173]: Ignition 2.19.0
Jan 17 00:26:46.022792 ignition[1173]: Stage: fetch
Jan 17 00:26:46.023333 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:46.023348 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:26:46.023471 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:26:46.072468 ignition[1173]: PUT result: OK
Jan 17 00:26:46.093458 ignition[1173]: parsed url from cmdline: ""
Jan 17 00:26:46.093541 ignition[1173]: no config URL provided
Jan 17 00:26:46.093562 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:26:46.093582 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:26:46.093626 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:26:46.097521 ignition[1173]: PUT result: OK
Jan 17 00:26:46.097587 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:26:46.099166 ignition[1173]: GET result: OK
Jan 17 00:26:46.099283 ignition[1173]: parsing config with SHA512: 8819eefaaf0b6f46606c2192fa408b68cf548fbeffbde13289edd646e6839d564cdf253f61fd2087794fbee8ffb3df8e51f21029de308079ebc0cda59d44ec9e
Jan 17 00:26:46.104114 unknown[1173]: fetched base config from "system"
Jan 17 00:26:46.104138 unknown[1173]: fetched base config from "system"
Jan 17 00:26:46.104716 ignition[1173]: fetch: fetch complete
Jan 17 00:26:46.104147 unknown[1173]: fetched user config from "aws"
Jan 17 00:26:46.104723 ignition[1173]: fetch: fetch passed
Jan 17 00:26:46.104794 ignition[1173]: Ignition finished successfully
Jan 17 00:26:46.107720 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:26:46.111282 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:26:46.140550 ignition[1180]: Ignition 2.19.0
Jan 17 00:26:46.140564 ignition[1180]: Stage: kargs
Jan 17 00:26:46.141196 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:46.141211 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:26:46.141336 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:26:46.142242 ignition[1180]: PUT result: OK
Jan 17 00:26:46.145346 ignition[1180]: kargs: kargs passed
Jan 17 00:26:46.145432 ignition[1180]: Ignition finished successfully
Jan 17 00:26:46.147394 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:26:46.152278 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:26:46.167927 ignition[1186]: Ignition 2.19.0
Jan 17 00:26:46.167941 ignition[1186]: Stage: disks
Jan 17 00:26:46.168443 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:46.168457 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:26:46.168575 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:26:46.169895 ignition[1186]: PUT result: OK
Jan 17 00:26:46.172345 ignition[1186]: disks: disks passed
Jan 17 00:26:46.172421 ignition[1186]: Ignition finished successfully
Jan 17 00:26:46.174315 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:26:46.175020 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:26:46.175412 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:26:46.175950 systemd[1]: Reached target local-fs.target - Local File Systems.
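The PUT/GET pairs in the fetch stage above are the IMDSv2 session flow: Ignition first PUTs for a session token, then presents it on the user-data GET. A self-contained Python sketch of the same exchange (stdlib only; the 2019-10-01 API path matches the log):

    # Reproduce Ignition's IMDSv2 exchange: token first, then user-data.
    import urllib.request

    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()

    data_req = urllib.request.Request(
        "http://169.254.169.254/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(data_req, timeout=2).read()
    print(len(user_data), "bytes of user-data")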
Jan 17 00:26:46.176532 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:26:46.177220 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:26:46.182279 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:26:46.213126 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:26:46.215985 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:26:46.220184 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:26:46.334125 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:26:46.334756 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:26:46.335753 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:26:46.343269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:26:46.345462 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:26:46.346744 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:26:46.347478 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:26:46.347504 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:26:46.359967 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:26:46.362604 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1213)
Jan 17 00:26:46.369784 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:46.369867 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:46.369888 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:26:46.374307 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:26:46.379074 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:26:46.380739 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:26:46.605160 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:26:46.635297 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:26:46.640294 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:26:46.645873 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:26:46.874671 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:26:46.880223 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:26:46.883715 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:26:46.892281 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:26:46.894944 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:26:46.926211 ignition[1326]: INFO : Ignition 2.19.0 Jan 17 00:26:46.928011 ignition[1326]: INFO : Stage: mount Jan 17 00:26:46.928011 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:26:46.928011 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:26:46.928011 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:26:46.931707 ignition[1326]: INFO : PUT result: OK Jan 17 00:26:46.935470 ignition[1326]: INFO : mount: mount passed Jan 17 00:26:46.937348 ignition[1326]: INFO : Ignition finished successfully Jan 17 00:26:46.939006 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:26:46.941629 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:26:46.947216 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:26:46.961546 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:26:46.978069 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1337) Jan 17 00:26:46.981135 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:26:46.981197 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:26:46.983579 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 00:26:46.989117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 00:26:46.990722 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:26:47.010818 ignition[1353]: INFO : Ignition 2.19.0 Jan 17 00:26:47.010818 ignition[1353]: INFO : Stage: files Jan 17 00:26:47.012077 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:26:47.012077 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:26:47.012077 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:26:47.013266 ignition[1353]: INFO : PUT result: OK Jan 17 00:26:47.014658 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:26:47.015395 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:26:47.015395 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:26:47.041869 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:26:47.042681 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:26:47.042681 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:26:47.042399 unknown[1353]: wrote ssh authorized keys file for user: core Jan 17 00:26:47.046598 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:26:47.047708 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:26:47.147210 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:26:47.308334 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 
00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:26:47.310278 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:26:47.505224 systemd-networkd[1164]: eth0: Gained IPv6LL Jan 17 00:26:47.763184 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:26:48.398590 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:26:48.398590 ignition[1353]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:26:48.400724 ignition[1353]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:26:48.400724 ignition[1353]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:26:48.400724 ignition[1353]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:26:48.400724 ignition[1353]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:26:48.400724 ignition[1353]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:26:48.400724 
ignition[1353]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:26:48.400724 ignition[1353]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:26:48.400724 ignition[1353]: INFO : files: files passed Jan 17 00:26:48.400724 ignition[1353]: INFO : Ignition finished successfully Jan 17 00:26:48.402287 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:26:48.408234 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:26:48.410753 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:26:48.413111 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:26:48.413220 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:26:48.429624 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:26:48.429624 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:26:48.432798 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:26:48.433259 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:26:48.434523 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:26:48.439233 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:26:48.467561 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:26:48.467702 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:26:48.469124 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:26:48.470214 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:26:48.471036 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:26:48.476318 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:26:48.490859 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:26:48.494318 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:26:48.519985 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:26:48.521378 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:26:48.522079 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:26:48.522954 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:26:48.523172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:26:48.524375 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:26:48.525420 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:26:48.526232 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:26:48.527014 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:26:48.527808 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:26:48.528620 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Jan 17 00:26:48.529528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:26:48.530357 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:26:48.531528 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:26:48.532320 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:26:48.533194 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:26:48.533373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:26:48.534489 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:26:48.535302 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:26:48.535986 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:26:48.536748 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:26:48.537476 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:26:48.537678 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:26:48.539144 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:26:48.539330 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:26:48.540067 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:26:48.540220 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:26:48.555407 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:26:48.556115 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:26:48.556332 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:26:48.559357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:26:48.562178 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:26:48.562533 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:26:48.565424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:26:48.565645 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:26:48.577472 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:26:48.577608 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:26:48.584790 ignition[1406]: INFO : Ignition 2.19.0 Jan 17 00:26:48.584790 ignition[1406]: INFO : Stage: umount Jan 17 00:26:48.586583 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:26:48.586583 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:26:48.586583 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:26:48.588303 ignition[1406]: INFO : PUT result: OK Jan 17 00:26:48.591181 ignition[1406]: INFO : umount: umount passed Jan 17 00:26:48.591181 ignition[1406]: INFO : Ignition finished successfully Jan 17 00:26:48.593977 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:26:48.594740 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:26:48.596683 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:26:48.597630 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:26:48.598908 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 17 00:26:48.598979 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:26:48.599908 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:26:48.599974 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:26:48.600940 systemd[1]: Stopped target network.target - Network. Jan 17 00:26:48.601510 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:26:48.601580 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:26:48.602215 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:26:48.603470 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:26:48.603810 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:26:48.604289 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:26:48.607074 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:26:48.607894 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:26:48.607959 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:26:48.609452 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:26:48.609510 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:26:48.610112 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:26:48.610184 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:26:48.610771 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:26:48.610832 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:26:48.611626 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:26:48.612320 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:26:48.614649 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:26:48.615118 systemd-networkd[1164]: eth0: DHCPv6 lease lost Jan 17 00:26:48.616584 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:26:48.616705 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:26:48.618244 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:26:48.618349 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:26:48.620487 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:26:48.620635 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:26:48.621736 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:26:48.621880 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:26:48.625735 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:26:48.625805 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:26:48.632191 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:26:48.632881 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:26:48.632965 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:26:48.633669 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:26:48.633736 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:26:48.634353 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 17 00:26:48.634414 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:26:48.635108 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:26:48.635166 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:26:48.635989 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:26:48.651633 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:26:48.651757 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:26:48.653286 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:26:48.653418 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:26:48.654794 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:26:48.654858 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:26:48.656230 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:26:48.656271 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:26:48.657076 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:26:48.657198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:26:48.658208 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:26:48.658265 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:26:48.658947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:26:48.658986 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:26:48.667265 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:26:48.667853 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:26:48.667921 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:26:48.671901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:26:48.671963 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:26:48.675737 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:26:48.675891 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:26:48.677143 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:26:48.686310 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:26:48.694374 systemd[1]: Switching root. Jan 17 00:26:48.733600 systemd-journald[179]: Journal stopped Jan 17 00:26:50.290956 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Jan 17 00:26:50.291033 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:26:50.291059 kernel: SELinux: policy capability open_perms=1 Jan 17 00:26:50.291076 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:26:50.291088 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:26:50.291100 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:26:50.291116 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:26:50.291127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:26:50.291139 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:26:50.291151 kernel: audit: type=1403 audit(1768609609.173:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:26:50.291165 systemd[1]: Successfully loaded SELinux policy in 75.066ms. Jan 17 00:26:50.291188 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.324ms. Jan 17 00:26:50.291202 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:26:50.291215 systemd[1]: Detected virtualization amazon. Jan 17 00:26:50.291237 systemd[1]: Detected architecture x86-64. Jan 17 00:26:50.291251 systemd[1]: Detected first boot. Jan 17 00:26:50.291264 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:26:50.291276 zram_generator::config[1448]: No configuration found. Jan 17 00:26:50.291290 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:26:50.291305 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:26:50.291317 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:26:50.291331 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:26:50.291343 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:26:50.291359 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:26:50.291372 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:26:50.291385 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:26:50.291397 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:26:50.291411 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:26:50.291423 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:26:50.291435 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:26:50.291448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:26:50.291465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:26:50.291479 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:26:50.291492 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:26:50.291505 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 00:26:50.291518 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:26:50.291530 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:26:50.291542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:26:50.291556 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:26:50.291568 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:26:50.291584 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:26:50.291596 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:26:50.291609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:26:50.291626 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:26:50.291639 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:26:50.291651 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:26:50.291664 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:26:50.291677 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:26:50.291692 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:26:50.291706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:26:50.291719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:26:50.291731 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:26:50.291744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:26:50.291757 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:26:50.291769 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:26:50.291782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:26:50.291796 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:26:50.291811 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:26:50.291825 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:26:50.291838 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:26:50.291851 systemd[1]: Reached target machines.target - Containers. Jan 17 00:26:50.291863 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:26:50.291875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:26:50.291888 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:26:50.291901 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:26:50.291916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:26:50.291929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:26:50.291942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:26:50.291954 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:26:50.291967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:26:50.291979 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:26:50.291992 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:26:50.292005 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:26:50.292018 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:26:50.292033 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:26:50.292066 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:26:50.292080 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:26:50.292092 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:26:50.292105 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:26:50.292117 kernel: fuse: init (API version 7.39) Jan 17 00:26:50.292129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:26:50.292142 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:26:50.292154 systemd[1]: Stopped verity-setup.service. Jan 17 00:26:50.292169 kernel: loop: module loaded Jan 17 00:26:50.292182 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:26:50.292204 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:26:50.292221 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:26:50.292234 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:26:50.292246 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:26:50.292259 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:26:50.292274 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:26:50.292287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:26:50.292300 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:26:50.292318 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:26:50.292331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:26:50.292343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:26:50.292362 kernel: ACPI: bus type drm_connector registered Jan 17 00:26:50.292378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:26:50.292391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:26:50.292404 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:26:50.292420 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:26:50.292433 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:26:50.292448 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:26:50.292461 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:26:50.292492 systemd-journald[1533]: Collecting audit messages is disabled. Jan 17 00:26:50.292518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 17 00:26:50.292531 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:26:50.292545 systemd-journald[1533]: Journal started Jan 17 00:26:50.292576 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec21075a1ca9af61f6f4c184679bbaf0) is 4.7M, max 38.2M, 33.4M free. Jan 17 00:26:49.973945 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:26:49.995615 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 00:26:49.996076 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:26:50.297074 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:26:50.297702 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:26:50.298514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:26:50.300583 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:26:50.310484 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:26:50.317868 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:26:50.324484 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:26:50.325071 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:26:50.325209 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:26:50.326664 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:26:50.331289 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:26:50.338389 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:26:50.339641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:26:50.343247 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:26:50.346159 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:26:50.346728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:26:50.350211 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:26:50.350832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:26:50.353589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:26:50.357212 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:26:50.363243 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:26:50.366422 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:26:50.368242 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:26:50.372004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:26:50.388147 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec21075a1ca9af61f6f4c184679bbaf0 is 46.870ms for 982 entries. 
Jan 17 00:26:50.388147 systemd-journald[1533]: System Journal (/var/log/journal/ec21075a1ca9af61f6f4c184679bbaf0) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:26:50.446462 systemd-journald[1533]: Received client request to flush runtime journal. Jan 17 00:26:50.446519 kernel: loop0: detected capacity change from 0 to 61336 Jan 17 00:26:50.388304 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:26:50.391500 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:26:50.397303 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:26:50.408339 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:26:50.416893 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:26:50.440296 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:26:50.445687 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:26:50.449577 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:26:50.462487 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:26:50.463401 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:26:50.473342 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:26:50.483095 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:26:50.483253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:26:50.510071 kernel: loop1: detected capacity change from 0 to 229808 Jan 17 00:26:50.518633 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Jan 17 00:26:50.519102 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Jan 17 00:26:50.524728 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:26:50.626072 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 00:26:50.730628 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:26:50.830910 kernel: loop4: detected capacity change from 0 to 61336 Jan 17 00:26:50.871082 kernel: loop5: detected capacity change from 0 to 229808 Jan 17 00:26:50.912144 kernel: loop6: detected capacity change from 0 to 140768 Jan 17 00:26:50.951083 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 00:26:50.990633 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 00:26:50.991335 (sd-merge)[1604]: Merged extensions into '/usr'. Jan 17 00:26:50.997684 systemd[1]: Reloading requested from client PID 1577 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:26:50.997703 systemd[1]: Reloading... Jan 17 00:26:51.125081 zram_generator::config[1630]: No configuration found. Jan 17 00:26:51.323595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:26:51.392819 systemd[1]: Reloading finished in 392 ms. Jan 17 00:26:51.424768 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:26:51.425915 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 17 00:26:51.437273 systemd[1]: Starting ensure-sysext.service... Jan 17 00:26:51.441813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:26:51.453313 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:26:51.467775 systemd[1]: Reloading requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:26:51.467795 systemd[1]: Reloading... Jan 17 00:26:51.498210 systemd-udevd[1684]: Using default interface naming scheme 'v255'. Jan 17 00:26:51.524095 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:26:51.524719 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:26:51.527509 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:26:51.528990 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Jan 17 00:26:51.530841 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Jan 17 00:26:51.552758 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:26:51.553784 systemd-tmpfiles[1683]: Skipping /boot Jan 17 00:26:51.582078 zram_generator::config[1712]: No configuration found. Jan 17 00:26:51.594565 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:26:51.594755 systemd-tmpfiles[1683]: Skipping /boot Jan 17 00:26:51.772250 (udev-worker)[1728]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:26:51.891346 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 00:26:51.908198 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1739) Jan 17 00:26:51.909014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:26:51.911080 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:26:51.914073 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 17 00:26:51.928089 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 00:26:51.975198 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 00:26:51.989704 ldconfig[1572]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:26:52.040119 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 17 00:26:52.104194 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:26:52.105514 systemd[1]: Reloading finished in 637 ms. Jan 17 00:26:52.132146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:26:52.133900 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:26:52.135958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:26:52.171076 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:26:52.211369 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:26:52.228562 systemd[1]: Finished ensure-sysext.service. 
Jan 17 00:26:52.234316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 00:26:52.236683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:26:52.246359 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:26:52.251958 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:26:52.254290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:26:52.263322 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:26:52.267511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:26:52.274363 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:26:52.280924 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:26:52.284698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:26:52.287879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:26:52.295960 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:26:52.313882 lvm[1880]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:26:52.312330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:26:52.323306 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:26:52.345596 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:26:52.347174 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:26:52.355378 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:26:52.365308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:26:52.366018 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:26:52.368030 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:26:52.369901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:26:52.370442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:26:52.372672 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:26:52.373302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:26:52.378302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:26:52.379105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:26:52.381177 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:26:52.381535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:26:52.391222 augenrules[1905]: No rules Jan 17 00:26:52.390394 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:26:52.393173 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:26:52.408171 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 00:26:52.416358 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:26:52.417695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:26:52.418365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:26:52.422363 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:26:52.431733 lvm[1920]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:26:52.432513 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:26:52.434627 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:26:52.451295 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:26:52.483677 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:26:52.495121 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:26:52.502412 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:26:52.503516 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:26:52.507176 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:26:52.541126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:26:52.591865 systemd-networkd[1897]: lo: Link UP Jan 17 00:26:52.591877 systemd-networkd[1897]: lo: Gained carrier Jan 17 00:26:52.594000 systemd-networkd[1897]: Enumeration completed Jan 17 00:26:52.594483 systemd-networkd[1897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:26:52.594488 systemd-networkd[1897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:26:52.595578 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:26:52.604352 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:26:52.606564 systemd-networkd[1897]: eth0: Link UP Jan 17 00:26:52.606846 systemd-networkd[1897]: eth0: Gained carrier Jan 17 00:26:52.606884 systemd-networkd[1897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:26:52.610179 systemd-resolved[1898]: Positive Trust Anchors: Jan 17 00:26:52.610544 systemd-resolved[1898]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:26:52.610687 systemd-resolved[1898]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:26:52.617439 systemd-resolved[1898]: Defaulting to hostname 'linux'. Jan 17 00:26:52.619202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:26:52.619780 systemd[1]: Reached target network.target - Network. Jan 17 00:26:52.620254 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:26:52.620653 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:26:52.621133 systemd-networkd[1897]: eth0: DHCPv4 address 172.31.25.116/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 00:26:52.621211 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:26:52.621759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:26:52.622942 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:26:52.623518 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:26:52.623897 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:26:52.624303 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:26:52.624346 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:26:52.624714 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:26:52.625543 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:26:52.627426 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:26:52.637725 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:26:52.638917 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:26:52.639433 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:26:52.639789 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:26:52.640195 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:26:52.640229 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:26:52.641360 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:26:52.645233 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:26:52.649128 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:26:52.651406 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:26:52.656251 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 17 00:26:52.657261 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:26:52.659704 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:26:52.662405 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:26:52.673514 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:26:52.680283 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:26:52.684263 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:26:52.694530 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:26:52.703395 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:26:52.704510 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:26:52.704993 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:26:52.709979 extend-filesystems[1945]: Found loop4 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found loop5 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found loop6 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found loop7 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1p1 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1p2 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1p3 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found usr Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1p4 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1p6 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1p7 Jan 17 00:26:52.715179 extend-filesystems[1945]: Found nvme0n1p9 Jan 17 00:26:52.715179 extend-filesystems[1945]: Checking size of /dev/nvme0n1p9 Jan 17 00:26:52.719414 jq[1944]: false Jan 17 00:26:52.711338 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:26:52.721236 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:26:52.724427 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:26:52.726163 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:26:52.734476 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:26:52.736113 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:26:52.771168 extend-filesystems[1945]: Resized partition /dev/nvme0n1p9 Jan 17 00:26:52.774976 (ntainerd)[1969]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:26:52.779073 extend-filesystems[1979]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:26:52.786666 dbus-daemon[1943]: [system] SELinux support is enabled Jan 17 00:26:52.786858 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:26:52.790984 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 17 00:26:52.791015 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:26:52.796071 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 17 00:26:52.794179 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:26:52.794201 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:26:52.799118 update_engine[1956]: I20260117 00:26:52.798230 1956 main.cc:92] Flatcar Update Engine starting Jan 17 00:26:52.807426 jq[1959]: true Jan 17 00:26:52.806817 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:26:52.806990 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:26:52.815719 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1897 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:26:52.816532 jq[1982]: true Jan 17 00:26:52.823507 update_engine[1956]: I20260117 00:26:52.823362 1956 update_check_scheduler.cc:74] Next update check in 3m39s Jan 17 00:26:52.832591 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:26:52.844942 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:26:52.848213 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:26:52.857455 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:26:52.862315 systemd-logind[1955]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:26:52.862336 systemd-logind[1955]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 17 00:26:52.862353 systemd-logind[1955]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:26:52.862834 systemd-logind[1955]: New seat seat0. Jan 17 00:26:52.863489 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 17 00:26:52.868340 tar[1962]: linux-amd64/LICENSE Jan 17 00:26:52.869176 tar[1962]: linux-amd64/helm Jan 17 00:26:52.878404 coreos-metadata[1942]: Jan 17 00:26:52.878 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:26:52.888081 coreos-metadata[1942]: Jan 17 00:26:52.886 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 00:26:52.888081 coreos-metadata[1942]: Jan 17 00:26:52.887 INFO Fetch successful Jan 17 00:26:52.888081 coreos-metadata[1942]: Jan 17 00:26:52.887 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 00:26:52.888210 coreos-metadata[1942]: Jan 17 00:26:52.888 INFO Fetch successful Jan 17 00:26:52.888210 coreos-metadata[1942]: Jan 17 00:26:52.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 00:26:52.890456 coreos-metadata[1942]: Jan 17 00:26:52.888 INFO Fetch successful Jan 17 00:26:52.890456 coreos-metadata[1942]: Jan 17 00:26:52.888 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 00:26:52.890559 coreos-metadata[1942]: Jan 17 00:26:52.890 INFO Fetch successful Jan 17 00:26:52.890559 coreos-metadata[1942]: Jan 17 00:26:52.890 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 00:26:52.892888 coreos-metadata[1942]: Jan 17 00:26:52.892 INFO Fetch failed with 404: resource not found Jan 17 00:26:52.892888 coreos-metadata[1942]: Jan 17 00:26:52.892 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 00:26:52.893843 coreos-metadata[1942]: Jan 17 00:26:52.893 INFO Fetch successful Jan 17 00:26:52.893894 coreos-metadata[1942]: Jan 17 00:26:52.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 00:26:52.894960 coreos-metadata[1942]: Jan 17 00:26:52.894 INFO Fetch successful Jan 17 00:26:52.895010 coreos-metadata[1942]: Jan 17 00:26:52.894 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 00:26:52.903315 coreos-metadata[1942]: Jan 17 00:26:52.898 INFO Fetch successful Jan 17 00:26:52.903315 coreos-metadata[1942]: Jan 17 00:26:52.898 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 00:26:52.903315 coreos-metadata[1942]: Jan 17 00:26:52.899 INFO Fetch successful Jan 17 00:26:52.903315 coreos-metadata[1942]: Jan 17 00:26:52.899 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 00:26:52.903315 coreos-metadata[1942]: Jan 17 00:26:52.900 INFO Fetch successful
Jan 17 00:26:52.904552 ntpd[1947]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:26:52.904577 ntpd[1947]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:26:52.904585 ntpd[1947]: ---------------------------------------------------- Jan 17 00:26:52.904592 ntpd[1947]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:26:52.904599 ntpd[1947]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:26:52.904606 ntpd[1947]: corporation. Support and training for ntp-4 are Jan 17 00:26:52.904613 ntpd[1947]: available at https://www.nwtime.org/support Jan 17 00:26:52.904620 ntpd[1947]: ---------------------------------------------------- Jan 17 00:26:52.908245 ntpd[1947]: proto: precision = 0.055 usec (-24) Jan 17 00:26:52.911865 ntpd[1947]: basedate set to 2026-01-04 Jan 17 00:26:52.911885 ntpd[1947]: gps base set to 2026-01-04 (week 2400) Jan 17 00:26:52.917099 ntpd[1947]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:26:52.917151 ntpd[1947]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:26:52.917300 ntpd[1947]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:26:52.917327 ntpd[1947]: Listen normally on 3 eth0 172.31.25.116:123 Jan 17 00:26:52.917357 ntpd[1947]: Listen normally on 4 lo [::1]:123 Jan 17 00:26:52.917399 ntpd[1947]: bind(21) AF_INET6 fe80::452:4dff:fe7d:947d%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:26:52.917415 ntpd[1947]: unable to create socket on eth0 (5) for fe80::452:4dff:fe7d:947d%2#123 Jan 17 00:26:52.917426 ntpd[1947]: failed to init interface for address fe80::452:4dff:fe7d:947d%2 Jan 17 00:26:52.917449 ntpd[1947]: Listening on routing socket on fd #21 for interface updates Jan 17 00:26:52.922464 ntpd[1947]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:26:52.923596 ntpd[1947]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:26:52.972066 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 17 00:26:52.991853 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:26:52.994016 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:26:52.994099 extend-filesystems[1979]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 00:26:52.994099 extend-filesystems[1979]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:26:52.994099 extend-filesystems[1979]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 17 00:26:52.998657 extend-filesystems[1945]: Resized filesystem in /dev/nvme0n1p9 Jan 17 00:26:52.996068 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:26:52.998190 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:26:53.002063 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1721) Jan 17 00:26:53.008583 bash[2020]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:26:53.011043 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:26:53.020130 systemd[1]: Starting sshkeys.service... Jan 17 00:26:53.059395 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:26:53.063947 sshd_keygen[2013]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:26:53.068371 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:26:53.128614 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:26:53.128759 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:26:53.132203 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1993 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:26:53.152407 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 00:26:53.190201 polkitd[2080]: Started polkitd version 121 Jan 17 00:26:53.206878 polkitd[2080]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:26:53.210226 polkitd[2080]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:26:53.216106 polkitd[2080]: Finished loading, compiling and executing 2 rules Jan 17 00:26:53.217551 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:26:53.217823 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:26:53.218611 polkitd[2080]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:26:53.246841 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:26:53.256437 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:26:53.259667 systemd-hostnamed[1993]: Hostname set to (transient) Jan 17 00:26:53.259773 systemd-resolved[1898]: System hostname changed to 'ip-172-31-25-116'.
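The coreos-metadata entries above follow the EC2 IMDSv2 pattern: a PUT to the token endpoint, then GETs against the 2021-01-03 metadata tree with the token presented (the ipv6 404 is expected on an instance with no IPv6 address). A minimal sketch of the same flow using only the standard library; the token TTL value is an assumption, not taken from the log:

```python
import urllib.request

IMDS = "http://169.254.169.254"

# IMDSv2: obtain a session token first (PUT), then present it on reads.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},  # TTL is our choice
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

def fetch(path: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

for path in ("instance-id", "instance-type", "local-ipv4",
             "placement/availability-zone"):
    print(path, "=", fetch(path))
```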
Jan 17 00:26:53.277716 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:26:53.286209 coreos-metadata[2036]: Jan 17 00:26:53.281 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:26:53.294303 coreos-metadata[2036]: Jan 17 00:26:53.294 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:26:53.302073 coreos-metadata[2036]: Jan 17 00:26:53.301 INFO Fetch successful Jan 17 00:26:53.302073 coreos-metadata[2036]: Jan 17 00:26:53.301 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:26:53.304659 coreos-metadata[2036]: Jan 17 00:26:53.304 INFO Fetch successful Jan 17 00:26:53.308115 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:26:53.308323 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:26:53.314464 unknown[2036]: wrote ssh authorized keys file for user: core Jan 17 00:26:53.317455 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:26:53.356918 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:26:53.361817 containerd[1969]: time="2026-01-17T00:26:53.361744482Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:26:53.368698 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:26:53.370244 update-ssh-keys[2154]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:26:53.372693 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:26:53.373408 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:26:53.375264 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:26:53.378103 systemd[1]: Finished sshkeys.service. Jan 17 00:26:53.398980 containerd[1969]: time="2026-01-17T00:26:53.398930815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:53.400699 containerd[1969]: time="2026-01-17T00:26:53.400661642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:53.400815 containerd[1969]: time="2026-01-17T00:26:53.400802097Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:26:53.400863 containerd[1969]: time="2026-01-17T00:26:53.400854313Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:26:53.401074 containerd[1969]: time="2026-01-17T00:26:53.401043731Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:26:53.401135 containerd[1969]: time="2026-01-17T00:26:53.401126442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401228 containerd[1969]: time="2026-01-17T00:26:53.401214344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401268 containerd[1969]: time="2026-01-17T00:26:53.401260656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401472 containerd[1969]: time="2026-01-17T00:26:53.401457195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401532 containerd[1969]: time="2026-01-17T00:26:53.401522862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401577 containerd[1969]: time="2026-01-17T00:26:53.401567563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401616 containerd[1969]: time="2026-01-17T00:26:53.401607805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401715 containerd[1969]: time="2026-01-17T00:26:53.401704730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:53.401948 containerd[1969]: time="2026-01-17T00:26:53.401934896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:53.402144 containerd[1969]: time="2026-01-17T00:26:53.402129195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:53.402196 containerd[1969]: time="2026-01-17T00:26:53.402186865Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:26:53.402320 containerd[1969]: time="2026-01-17T00:26:53.402308564Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:26:53.402414 containerd[1969]: time="2026-01-17T00:26:53.402385937Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407024488Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407105453Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407123763Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407138791Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407188691Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407334464Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407700454Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407809943Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407827492Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407884230Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407900308Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407912819Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407925878Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408075 containerd[1969]: time="2026-01-17T00:26:53.407950171Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.407964022Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.407978310Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.407990285Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408003912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408023689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408069967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408083559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408096485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408108953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408121698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408132891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408145635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408159925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408442 containerd[1969]: time="2026-01-17T00:26:53.408183415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408195287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408206252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408218089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408235244Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408255983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408267298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408278062Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408330337Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408348480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408359139Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408370515Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408435046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408447099Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:26:53.408725 containerd[1969]: time="2026-01-17T00:26:53.408461365Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:26:53.409069 containerd[1969]: time="2026-01-17T00:26:53.408472072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:26:53.409095 containerd[1969]: time="2026-01-17T00:26:53.408748610Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:26:53.409095 containerd[1969]: time="2026-01-17T00:26:53.408807899Z" level=info msg="Connect containerd service" Jan 17 00:26:53.409095 containerd[1969]: time="2026-01-17T00:26:53.408841754Z" level=info msg="using legacy CRI server" Jan 17 00:26:53.409095 containerd[1969]: time="2026-01-17T00:26:53.408848467Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:26:53.412130 containerd[1969]: time="2026-01-17T00:26:53.411168937Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:26:53.412130 containerd[1969]: time="2026-01-17T00:26:53.412024346Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:26:53.413083 
containerd[1969]: time="2026-01-17T00:26:53.412675821Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:26:53.413083 containerd[1969]: time="2026-01-17T00:26:53.412905012Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:26:53.413083 containerd[1969]: time="2026-01-17T00:26:53.412954211Z" level=info msg="Start subscribing containerd event" Jan 17 00:26:53.413083 containerd[1969]: time="2026-01-17T00:26:53.412999657Z" level=info msg="Start recovering state" Jan 17 00:26:53.413435 containerd[1969]: time="2026-01-17T00:26:53.413415612Z" level=info msg="Start event monitor" Jan 17 00:26:53.413616 containerd[1969]: time="2026-01-17T00:26:53.413499557Z" level=info msg="Start snapshots syncer" Jan 17 00:26:53.413616 containerd[1969]: time="2026-01-17T00:26:53.413516578Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:26:53.414786 containerd[1969]: time="2026-01-17T00:26:53.413551464Z" level=info msg="Start streaming server" Jan 17 00:26:53.414786 containerd[1969]: time="2026-01-17T00:26:53.413812176Z" level=info msg="containerd successfully booted in 0.053462s" Jan 17 00:26:53.414159 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:26:53.713198 systemd-networkd[1897]: eth0: Gained IPv6LL Jan 17 00:26:53.718146 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:26:53.719891 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:26:53.730323 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:26:53.742396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:26:53.745499 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:26:53.750347 tar[1962]: linux-amd64/README.md Jan 17 00:26:53.778201 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:26:53.809494 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:26:53.818824 amazon-ssm-agent[2164]: Initializing new seelog logger Jan 17 00:26:53.819292 amazon-ssm-agent[2164]: New Seelog Logger Creation Complete Jan 17 00:26:53.819411 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:26:53.819445 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:26:53.819818 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 processing appconfig overrides Jan 17 00:26:53.820303 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO Proxy environment variables: Jan 17 00:26:53.820429 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:26:53.820582 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:26:53.820706 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 processing appconfig overrides Jan 17 00:26:53.821090 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:26:53.821090 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:26:53.821179 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 processing appconfig overrides Jan 17 00:26:53.824142 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
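containerd's startup above logs "failed to load cni during init ... no network config found in /etc/cni/net.d"; that is the normal state of a node before a network add-on (or kubeadm plus a CNI plugin) installs a config. For illustration only, a sketch that writes a conflist of the shape the loader expects; the name, subnet, and plugin choice are hypothetical, not from this host, and real clusters should let their network add-on own this file:

```python
import json
import pathlib

# Hypothetical minimal CNI config; values here are illustrative assumptions.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",                 # made-up network name
    "plugins": [
        {
            "type": "bridge",              # needs the standard CNI bridge plugin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
print(f"wrote {path}")
```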
Jan 17 00:26:53.824142 amazon-ssm-agent[2164]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:26:53.824256 amazon-ssm-agent[2164]: 2026/01/17 00:26:53 processing appconfig overrides Jan 17 00:26:53.921505 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO https_proxy: Jan 17 00:26:54.019258 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO http_proxy: Jan 17 00:26:54.118173 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO no_proxy: Jan 17 00:26:54.216308 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:26:54.264157 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:26:54.264157 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO Agent will take identity from EC2 Jan 17 00:26:54.264157 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:26:54.264157 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:26:54.264157 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:26:54.264157 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:26:54.264157 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [Registrar] Starting registrar module Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:53 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:54 INFO [EC2Identity] EC2 registration was successful. Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:54 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:54 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:26:54.264501 amazon-ssm-agent[2164]: 2026-01-17 00:26:54 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:26:54.314972 amazon-ssm-agent[2164]: 2026-01-17 00:26:54 INFO [CredentialRefresher] Next credential rotation will be in 32.29999401326667 minutes Jan 17 00:26:55.276579 amazon-ssm-agent[2164]: 2026-01-17 00:26:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:26:55.379113 amazon-ssm-agent[2164]: 2026-01-17 00:26:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2187) started Jan 17 00:26:55.478720 amazon-ssm-agent[2164]: 2026-01-17 00:26:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:26:55.735656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:26:55.737313 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:26:55.738185 systemd[1]: Startup finished in 589ms (kernel) + 6.439s (initrd) + 6.638s (userspace) = 13.668s. 
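The "Startup finished" record above reports 589ms (kernel) + 6.439s (initrd) + 6.638s (userspace) = 13.668s. The displayed phases are rounded, so they sum to 13.666s, a couple of milliseconds short of the printed total:

```python
# systemd rounds each displayed phase, so the parts can differ from the
# reported total by a millisecond or two.
kernel, initrd, userspace = 0.589, 6.439, 6.638
total_reported = 13.668

phase_sum = kernel + initrd + userspace
print(f"sum of phases: {phase_sum:.3f}s")                  # 13.666s
print(f"reported:      {total_reported:.3f}s")             # 13.668s
print(f"rounding gap:  {total_reported - phase_sum:.3f}s") # 0.002s
```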
Jan 17 00:26:55.744216 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:26:55.905320 ntpd[1947]: Listen normally on 6 eth0 [fe80::452:4dff:fe7d:947d%2]:123 Jan 17 00:26:56.680965 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:26:56.689094 systemd[1]: Started sshd@0-172.31.25.116:22-4.153.228.146:40336.service - OpenSSH per-connection server daemon (4.153.228.146:40336). Jan 17 00:26:56.807268 kubelet[2203]: E0117 00:26:56.807199 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:26:56.810108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:26:56.810261 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:26:56.810532 systemd[1]: kubelet.service: Consumed 1.078s CPU time. Jan 17 00:26:57.232041 sshd[2213]: Accepted publickey for core from 4.153.228.146 port 40336 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:26:57.234241 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:57.243885 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:26:57.256525 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:26:57.260032 systemd-logind[1955]: New session 1 of user core. Jan 17 00:26:57.272520 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:26:57.279419 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:26:57.284606 (systemd)[2219]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:26:57.399758 systemd[2219]: Queued start job for default target default.target. Jan 17 00:26:57.406165 systemd[2219]: Created slice app.slice - User Application Slice. Jan 17 00:26:57.406196 systemd[2219]: Reached target paths.target - Paths. Jan 17 00:26:57.406210 systemd[2219]: Reached target timers.target - Timers. Jan 17 00:26:57.407543 systemd[2219]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:26:57.420645 systemd[2219]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:26:57.420910 systemd[2219]: Reached target sockets.target - Sockets. Jan 17 00:26:57.420933 systemd[2219]: Reached target basic.target - Basic System. Jan 17 00:26:57.420988 systemd[2219]: Reached target default.target - Main User Target. Jan 17 00:26:57.421029 systemd[2219]: Startup finished in 129ms. Jan 17 00:26:57.421495 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:26:57.431306 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:26:57.820237 systemd[1]: Started sshd@1-172.31.25.116:22-4.153.228.146:40350.service - OpenSSH per-connection server daemon (4.153.228.146:40350).
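The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the expected crash-loop on a node that has not yet been joined to a cluster: kubeadm init/join writes that file, after which the scheduled restarts succeed. For illustration only, a sketch of placing a minimal KubeletConfiguration of the expected kind; the field values are assumptions, not the file kubeadm would generate, and kubeadm should own this path on a managed node:

```python
import pathlib

# Illustrative stand-in for the file kubeadm writes; values are assumptions.
# cgroupDriver: systemd matches the SystemdCgroup:true runc option seen in
# the containerd CRI config dump earlier in this log.
minimal_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(minimal_config)
print(f"wrote {path} ({len(minimal_config)} bytes)")
```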
Jan 17 00:26:58.381627 sshd[2230]: Accepted publickey for core from 4.153.228.146 port 40350 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:26:58.383099 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:58.389111 systemd-logind[1955]: New session 2 of user core. Jan 17 00:26:58.400506 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:26:58.756394 sshd[2230]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:58.759305 systemd[1]: sshd@1-172.31.25.116:22-4.153.228.146:40350.service: Deactivated successfully. Jan 17 00:26:58.761335 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:26:58.762996 systemd-logind[1955]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:26:58.764392 systemd-logind[1955]: Removed session 2. Jan 17 00:26:58.848039 systemd[1]: Started sshd@2-172.31.25.116:22-4.153.228.146:40362.service - OpenSSH per-connection server daemon (4.153.228.146:40362). Jan 17 00:26:59.371833 sshd[2237]: Accepted publickey for core from 4.153.228.146 port 40362 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:26:59.374767 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:26:59.387320 systemd-logind[1955]: New session 3 of user core. Jan 17 00:26:59.394466 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:26:59.742036 sshd[2237]: pam_unix(sshd:session): session closed for user core Jan 17 00:26:59.753423 systemd[1]: sshd@2-172.31.25.116:22-4.153.228.146:40362.service: Deactivated successfully. Jan 17 00:26:59.755650 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:26:59.757869 systemd-logind[1955]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:26:59.765270 systemd-logind[1955]: Removed session 3. Jan 17 00:26:59.843493 systemd[1]: Started sshd@3-172.31.25.116:22-4.153.228.146:40374.service - OpenSSH per-connection server daemon (4.153.228.146:40374). Jan 17 00:27:01.956489 systemd-resolved[1898]: Clock change detected. Flushing caches. Jan 17 00:27:02.443617 sshd[2244]: Accepted publickey for core from 4.153.228.146 port 40374 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:27:02.450740 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:02.462954 systemd-logind[1955]: New session 4 of user core. Jan 17 00:27:02.481011 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:27:02.834171 sshd[2244]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:02.840583 systemd[1]: sshd@3-172.31.25.116:22-4.153.228.146:40374.service: Deactivated successfully. Jan 17 00:27:02.847262 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:27:02.848732 systemd-logind[1955]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:27:02.857183 systemd-logind[1955]: Removed session 4. Jan 17 00:27:02.920278 systemd[1]: Started sshd@4-172.31.25.116:22-4.153.228.146:40388.service - OpenSSH per-connection server daemon (4.153.228.146:40388). Jan 17 00:27:03.436402 sshd[2251]: Accepted publickey for core from 4.153.228.146 port 40388 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:27:03.437867 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:03.442674 systemd-logind[1955]: New session 5 of user core. 
Jan 17 00:27:03.456037 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:27:03.742436 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:27:03.742735 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:03.759523 sudo[2254]: pam_unix(sudo:session): session closed for user root Jan 17 00:27:03.836156 sshd[2251]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:03.839079 systemd[1]: sshd@4-172.31.25.116:22-4.153.228.146:40388.service: Deactivated successfully. Jan 17 00:27:03.841126 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:27:03.842556 systemd-logind[1955]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:27:03.844355 systemd-logind[1955]: Removed session 5. Jan 17 00:27:03.935977 systemd[1]: Started sshd@5-172.31.25.116:22-4.153.228.146:40394.service - OpenSSH per-connection server daemon (4.153.228.146:40394). Jan 17 00:27:04.466147 sshd[2259]: Accepted publickey for core from 4.153.228.146 port 40394 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:27:04.467899 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:04.472852 systemd-logind[1955]: New session 6 of user core. Jan 17 00:27:04.479016 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:27:04.763399 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:27:04.763886 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:04.768039 sudo[2263]: pam_unix(sudo:session): session closed for user root Jan 17 00:27:04.773800 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:27:04.774204 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:04.794210 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:27:04.796562 auditctl[2266]: No rules Jan 17 00:27:04.797047 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:27:04.797272 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:27:04.800541 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:27:04.841684 augenrules[2284]: No rules Jan 17 00:27:04.843193 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:27:04.844660 sudo[2262]: pam_unix(sudo:session): session closed for user root Jan 17 00:27:04.929401 sshd[2259]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:04.932800 systemd[1]: sshd@5-172.31.25.116:22-4.153.228.146:40394.service: Deactivated successfully. Jan 17 00:27:04.934580 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:27:04.937253 systemd-logind[1955]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:27:04.939183 systemd-logind[1955]: Removed session 6. Jan 17 00:27:05.030166 systemd[1]: Started sshd@6-172.31.25.116:22-4.153.228.146:35012.service - OpenSSH per-connection server daemon (4.153.228.146:35012). 
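The sudo sequence above removes the default SELinux and audit rule files and restarts audit-rules; both auditctl and augenrules then report "No rules", i.e. an empty in-kernel rule set. One way to confirm that state from a script (auditctl -l lists the loaded rules and needs CAP_AUDIT_CONTROL, typically root):

```python
import subprocess

# List the audit rules currently loaded in the kernel. On this host the
# expected output after the restart above is the literal line "No rules".
result = subprocess.run(
    ["auditctl", "-l"],
    capture_output=True,
    text=True,
    check=True,  # raises if auditctl exits non-zero (e.g. insufficient privileges)
)
print(result.stdout.strip())
```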
Jan 17 00:27:05.547987 sshd[2292]: Accepted publickey for core from 4.153.228.146 port 35012 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:27:05.549527 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:05.555213 systemd-logind[1955]: New session 7 of user core. Jan 17 00:27:05.562114 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:27:05.839175 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:27:05.839495 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:06.366353 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:27:06.366460 (dockerd)[2310]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:27:06.870580 dockerd[2310]: time="2026-01-17T00:27:06.870518833Z" level=info msg="Starting up" Jan 17 00:27:07.034296 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4188578992-merged.mount: Deactivated successfully. Jan 17 00:27:07.087929 dockerd[2310]: time="2026-01-17T00:27:07.087666616Z" level=info msg="Loading containers: start." Jan 17 00:27:07.233772 kernel: Initializing XFRM netlink socket Jan 17 00:27:07.261547 (udev-worker)[2332]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:27:07.328951 systemd-networkd[1897]: docker0: Link UP Jan 17 00:27:07.364801 dockerd[2310]: time="2026-01-17T00:27:07.364739308Z" level=info msg="Loading containers: done." Jan 17 00:27:07.393150 dockerd[2310]: time="2026-01-17T00:27:07.393087232Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:27:07.393326 dockerd[2310]: time="2026-01-17T00:27:07.393203136Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:27:07.393326 dockerd[2310]: time="2026-01-17T00:27:07.393308638Z" level=info msg="Daemon has completed initialization" Jan 17 00:27:07.486859 dockerd[2310]: time="2026-01-17T00:27:07.486438493Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:27:07.486548 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:27:08.679730 containerd[1969]: time="2026-01-17T00:27:08.679690973Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:27:09.111292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:27:09.119082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:09.269310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080751887.mount: Deactivated successfully. Jan 17 00:27:09.421808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
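dockerd above finishes with "API listen on /run/docker.sock"; the Engine API answers plain HTTP on that UNIX socket. A minimal standard-library probe, equivalent to `curl --unix-socket /run/docker.sock http://localhost/version` (requires read/write permission on the socket):

```python
import json
import socket

# Speak HTTP/1.0 over the Docker UNIX socket so the daemon closes the
# connection after one response and we can simply read until EOF.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: localhost\r\n\r\n")
    raw = b""
    while chunk := s.recv(4096):
        raw += chunk

# Split headers from the JSON body and pick out a couple of fields.
_headers, _, body = raw.partition(b"\r\n\r\n")
info = json.loads(body)
print(info["Version"], info["ApiVersion"])  # e.g. "26.1.0" per the log above
```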
Jan 17 00:27:09.430281 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:09.493948 kubelet[2465]: E0117 00:27:09.493017 2465 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:09.499112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:09.499297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:11.248309 containerd[1969]: time="2026-01-17T00:27:11.248243884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:11.249649 containerd[1969]: time="2026-01-17T00:27:11.249556147Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 17 00:27:11.251527 containerd[1969]: time="2026-01-17T00:27:11.250843611Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:11.254232 containerd[1969]: time="2026-01-17T00:27:11.254194731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:11.255315 containerd[1969]: time="2026-01-17T00:27:11.255227472Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.575495694s" Jan 17 00:27:11.255315 containerd[1969]: time="2026-01-17T00:27:11.255271003Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 17 00:27:11.255904 containerd[1969]: time="2026-01-17T00:27:11.255874867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:27:13.412583 containerd[1969]: time="2026-01-17T00:27:13.411638019Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 17 00:27:13.412583 containerd[1969]: time="2026-01-17T00:27:13.412510516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:13.415021 containerd[1969]: time="2026-01-17T00:27:13.414956166Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:13.416594 containerd[1969]: time="2026-01-17T00:27:13.416411883Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", 
repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 2.160495316s" Jan 17 00:27:13.416594 containerd[1969]: time="2026-01-17T00:27:13.416474427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 17 00:27:13.417305 containerd[1969]: time="2026-01-17T00:27:13.417261208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:13.418559 containerd[1969]: time="2026-01-17T00:27:13.418526925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 17 00:27:14.848310 containerd[1969]: time="2026-01-17T00:27:14.848246907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:14.849379 containerd[1969]: time="2026-01-17T00:27:14.849259761Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 17 00:27:14.850762 containerd[1969]: time="2026-01-17T00:27:14.850367467Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:14.853167 containerd[1969]: time="2026-01-17T00:27:14.853135670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:14.854254 containerd[1969]: time="2026-01-17T00:27:14.854222457Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.435662273s" Jan 17 00:27:14.854336 containerd[1969]: time="2026-01-17T00:27:14.854258988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 17 00:27:14.854842 containerd[1969]: time="2026-01-17T00:27:14.854820393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:27:15.916996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182108220.mount: Deactivated successfully. 
Jan 17 00:27:16.537324 containerd[1969]: time="2026-01-17T00:27:16.537258225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:16.539597 containerd[1969]: time="2026-01-17T00:27:16.539527799Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 17 00:27:16.541925 containerd[1969]: time="2026-01-17T00:27:16.541862136Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:16.545897 containerd[1969]: time="2026-01-17T00:27:16.545342488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:16.546135 containerd[1969]: time="2026-01-17T00:27:16.546105095Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.691250997s" Jan 17 00:27:16.546232 containerd[1969]: time="2026-01-17T00:27:16.546213932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:27:16.546844 containerd[1969]: time="2026-01-17T00:27:16.546811466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:27:17.129405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290809608.mount: Deactivated successfully. 
Jan 17 00:27:18.497516 containerd[1969]: time="2026-01-17T00:27:18.497451663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:18.499869 containerd[1969]: time="2026-01-17T00:27:18.499625568Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 17 00:27:18.502788 containerd[1969]: time="2026-01-17T00:27:18.502163554Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:18.506481 containerd[1969]: time="2026-01-17T00:27:18.506417428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:18.508336 containerd[1969]: time="2026-01-17T00:27:18.507577146Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.960731725s" Jan 17 00:27:18.508336 containerd[1969]: time="2026-01-17T00:27:18.507629891Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 17 00:27:18.508664 containerd[1969]: time="2026-01-17T00:27:18.508619987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:27:18.966809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908097889.mount: Deactivated successfully. 
Jan 17 00:27:18.973295 containerd[1969]: time="2026-01-17T00:27:18.973231933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:18.974098 containerd[1969]: time="2026-01-17T00:27:18.973984198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:27:18.976404 containerd[1969]: time="2026-01-17T00:27:18.974993402Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:18.977317 containerd[1969]: time="2026-01-17T00:27:18.977280217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:18.978178 containerd[1969]: time="2026-01-17T00:27:18.978142723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 469.485825ms" Jan 17 00:27:18.978283 containerd[1969]: time="2026-01-17T00:27:18.978183931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:27:18.978928 containerd[1969]: time="2026-01-17T00:27:18.978818522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:27:19.432882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount735456332.mount: Deactivated successfully. Jan 17 00:27:19.749741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:27:19.755987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:20.214078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:20.219186 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:20.300243 kubelet[2650]: E0117 00:27:20.300185 2650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:20.303007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:20.303266 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:27:22.295853 containerd[1969]: time="2026-01-17T00:27:22.295795691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:22.304027 containerd[1969]: time="2026-01-17T00:27:22.303956967Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 17 00:27:22.310774 containerd[1969]: time="2026-01-17T00:27:22.310640436Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:22.320415 containerd[1969]: time="2026-01-17T00:27:22.320358111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:22.322220 containerd[1969]: time="2026-01-17T00:27:22.322039202Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.343181531s" Jan 17 00:27:22.322220 containerd[1969]: time="2026-01-17T00:27:22.322090474Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 17 00:27:25.336581 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:27:26.404299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:26.418161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:26.459576 systemd[1]: Reloading requested from client PID 2696 ('systemctl') (unit session-7.scope)... Jan 17 00:27:26.459813 systemd[1]: Reloading... Jan 17 00:27:26.594252 zram_generator::config[2736]: No configuration found. Jan 17 00:27:26.752254 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:27:26.838320 systemd[1]: Reloading finished in 377 ms. Jan 17 00:27:26.882427 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:27:26.882676 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:27:26.882950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:26.885220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:27.107495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:27.119188 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:27:27.175370 kubelet[2797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:27:27.175370 kubelet[2797]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
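The docker.socket warning above recurs on every daemon reload: systemd rewrites the legacy /var/run/docker.sock path at runtime but asks for the unit file itself to be updated. A hedged sketch of a drop-in that would make the unit match what systemd is already doing (the drop-in path is hypothetical; the socket path comes from the warning itself):

    # /etc/systemd/system/docker.socket.d/10-runtime-dir.conf  (hypothetical)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The empty ListenStream= first clears the inherited value, which is how list-type systemd settings are overridden in drop-ins.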
Jan 17 00:27:27.175370 kubelet[2797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:27:27.178307 kubelet[2797]: I0117 00:27:27.178238 2797 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:27:27.691177 kubelet[2797]: I0117 00:27:27.691128 2797 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:27:27.691177 kubelet[2797]: I0117 00:27:27.691161 2797 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:27:27.691554 kubelet[2797]: I0117 00:27:27.691482 2797 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:27:27.729735 kubelet[2797]: I0117 00:27:27.729698 2797 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:27:27.733519 kubelet[2797]: E0117 00:27:27.733451 2797 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:27:27.758793 kubelet[2797]: E0117 00:27:27.758573 2797 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:27:27.758793 kubelet[2797]: I0117 00:27:27.758623 2797 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:27:27.768708 kubelet[2797]: I0117 00:27:27.768676 2797 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:27:27.771915 kubelet[2797]: I0117 00:27:27.771860 2797 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:27:27.775592 kubelet[2797]: I0117 00:27:27.771910 2797 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-116","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:27:27.777008 kubelet[2797]: I0117 00:27:27.776970 2797 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:27:27.777008 kubelet[2797]: I0117 00:27:27.777003 2797 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:27:27.778192 kubelet[2797]: I0117 00:27:27.778152 2797 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:27:27.782443 kubelet[2797]: I0117 00:27:27.781892 2797 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:27:27.782443 kubelet[2797]: I0117 00:27:27.781925 2797 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:27:27.783592 kubelet[2797]: I0117 00:27:27.783569 2797 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:27:27.785837 kubelet[2797]: I0117 00:27:27.785437 2797 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:27:27.787781 kubelet[2797]: E0117 00:27:27.787271 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-116&limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:27:27.794458 kubelet[2797]: E0117 00:27:27.794031 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 17 00:27:27.794582 kubelet[2797]: I0117 00:27:27.794551 2797 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:27:27.795793 kubelet[2797]: I0117 00:27:27.795034 2797 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:27:27.796118 kubelet[2797]: W0117 00:27:27.796090 2797 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:27:27.801859 kubelet[2797]: I0117 00:27:27.801821 2797 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:27:27.801973 kubelet[2797]: I0117 00:27:27.801885 2797 server.go:1289] "Started kubelet" Jan 17 00:27:27.805019 kubelet[2797]: I0117 00:27:27.804338 2797 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:27:27.806552 kubelet[2797]: I0117 00:27:27.806481 2797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:27:27.808767 kubelet[2797]: I0117 00:27:27.806939 2797 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:27:27.809733 kubelet[2797]: I0117 00:27:27.809694 2797 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:27:27.817991 kubelet[2797]: E0117 00:27:27.812160 2797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.116:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.116:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-116.188b5d1aab8ecd2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-116,UID:ip-172-31-25-116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-116,},FirstTimestamp:2026-01-17 00:27:27.801847087 +0000 UTC m=+0.670770968,LastTimestamp:2026-01-17 00:27:27.801847087 +0000 UTC m=+0.670770968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-116,}" Jan 17 00:27:27.820773 kubelet[2797]: I0117 00:27:27.819077 2797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:27:27.820773 kubelet[2797]: I0117 00:27:27.820257 2797 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:27:27.822298 kubelet[2797]: E0117 00:27:27.822274 2797 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-25-116\" not found" Jan 17 00:27:27.822392 kubelet[2797]: I0117 00:27:27.822385 2797 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:27:27.825038 kubelet[2797]: I0117 00:27:27.825018 2797 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:27:27.826720 kubelet[2797]: I0117 00:27:27.826705 2797 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:27:27.827729 kubelet[2797]: E0117 00:27:27.827221 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:27:27.830674 kubelet[2797]: I0117 00:27:27.829794 2797 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:27:27.833381 kubelet[2797]: E0117 00:27:27.832372 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-116?timeout=10s\": dial tcp 172.31.25.116:6443: connect: connection refused" interval="200ms" Jan 17 00:27:27.833381 kubelet[2797]: I0117 00:27:27.832564 2797 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:27:27.833381 kubelet[2797]: I0117 00:27:27.832635 2797 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:27:27.835854 kubelet[2797]: E0117 00:27:27.835828 2797 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:27:27.837019 kubelet[2797]: I0117 00:27:27.836997 2797 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:27:27.858237 kubelet[2797]: I0117 00:27:27.857684 2797 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:27:27.858237 kubelet[2797]: I0117 00:27:27.857705 2797 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:27:27.858237 kubelet[2797]: I0117 00:27:27.857728 2797 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:27:27.862670 kubelet[2797]: I0117 00:27:27.862457 2797 policy_none.go:49] "None policy: Start" Jan 17 00:27:27.862670 kubelet[2797]: I0117 00:27:27.862488 2797 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:27:27.862670 kubelet[2797]: I0117 00:27:27.862506 2797 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:27:27.871206 kubelet[2797]: I0117 00:27:27.870766 2797 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:27:27.871206 kubelet[2797]: I0117 00:27:27.870801 2797 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:27:27.871206 kubelet[2797]: I0117 00:27:27.870829 2797 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:27:27.871206 kubelet[2797]: I0117 00:27:27.870840 2797 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:27:27.871206 kubelet[2797]: E0117 00:27:27.870890 2797 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:27:27.875543 kubelet[2797]: E0117 00:27:27.875504 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:27:27.880360 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:27:27.893204 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 17 00:27:27.896903 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:27:27.908272 kubelet[2797]: E0117 00:27:27.907954 2797 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:27:27.908384 kubelet[2797]: I0117 00:27:27.908288 2797 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:27:27.908384 kubelet[2797]: I0117 00:27:27.908304 2797 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:27:27.908861 kubelet[2797]: I0117 00:27:27.908588 2797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:27:27.910126 kubelet[2797]: E0117 00:27:27.910103 2797 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:27:27.910197 kubelet[2797]: E0117 00:27:27.910143 2797 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-116\" not found" Jan 17 00:27:27.986943 systemd[1]: Created slice kubepods-burstable-pod7456f0c02b726c630501cde9a587bd4b.slice - libcontainer container kubepods-burstable-pod7456f0c02b726c630501cde9a587bd4b.slice. Jan 17 00:27:28.004808 kubelet[2797]: E0117 00:27:28.004772 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:28.008575 systemd[1]: Created slice kubepods-burstable-podd33356b2198d856381d5cd0a9caec7f8.slice - libcontainer container kubepods-burstable-podd33356b2198d856381d5cd0a9caec7f8.slice. Jan 17 00:27:28.011699 kubelet[2797]: I0117 00:27:28.011673 2797 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-116" Jan 17 00:27:28.012319 kubelet[2797]: E0117 00:27:28.012289 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.116:6443/api/v1/nodes\": dial tcp 172.31.25.116:6443: connect: connection refused" node="ip-172-31-25-116" Jan 17 00:27:28.013260 kubelet[2797]: E0117 00:27:28.013020 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:28.016139 systemd[1]: Created slice kubepods-burstable-pod2239e534f67220d168b3bd33efc6e8f1.slice - libcontainer container kubepods-burstable-pod2239e534f67220d168b3bd33efc6e8f1.slice. 
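Every "dial tcp 172.31.25.116:6443: connect: connection refused" above, including the failed node registration, has a single cause: the kube-apiserver static pod is not serving yet. A sketch of the equivalent reachability probe, with host and port taken from the log:

    import socket

    def apiserver_up(host="172.31.25.116", port=6443, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(apiserver_up())  # stays False until the apiserver container starts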
Jan 17 00:27:28.018244 kubelet[2797]: E0117 00:27:28.018215 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:28.033619 kubelet[2797]: E0117 00:27:28.033574 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-116?timeout=10s\": dial tcp 172.31.25.116:6443: connect: connection refused" interval="400ms" Jan 17 00:27:28.129228 kubelet[2797]: I0117 00:27:28.129186 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7456f0c02b726c630501cde9a587bd4b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-116\" (UID: \"7456f0c02b726c630501cde9a587bd4b\") " pod="kube-system/kube-apiserver-ip-172-31-25-116" Jan 17 00:27:28.129343 kubelet[2797]: I0117 00:27:28.129260 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:28.129343 kubelet[2797]: I0117 00:27:28.129324 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:28.129427 kubelet[2797]: I0117 00:27:28.129348 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:28.129427 kubelet[2797]: I0117 00:27:28.129368 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:28.129427 kubelet[2797]: I0117 00:27:28.129384 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7456f0c02b726c630501cde9a587bd4b-ca-certs\") pod \"kube-apiserver-ip-172-31-25-116\" (UID: \"7456f0c02b726c630501cde9a587bd4b\") " pod="kube-system/kube-apiserver-ip-172-31-25-116" Jan 17 00:27:28.129427 kubelet[2797]: I0117 00:27:28.129405 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7456f0c02b726c630501cde9a587bd4b-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-116\" (UID: \"7456f0c02b726c630501cde9a587bd4b\") " pod="kube-system/kube-apiserver-ip-172-31-25-116" Jan 17 00:27:28.129550 kubelet[2797]: I0117 00:27:28.129422 2797 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:28.129550 kubelet[2797]: I0117 00:27:28.129457 2797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2239e534f67220d168b3bd33efc6e8f1-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-116\" (UID: \"2239e534f67220d168b3bd33efc6e8f1\") " pod="kube-system/kube-scheduler-ip-172-31-25-116" Jan 17 00:27:28.214336 kubelet[2797]: I0117 00:27:28.214299 2797 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-116" Jan 17 00:27:28.214771 kubelet[2797]: E0117 00:27:28.214654 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.116:6443/api/v1/nodes\": dial tcp 172.31.25.116:6443: connect: connection refused" node="ip-172-31-25-116" Jan 17 00:27:28.306849 containerd[1969]: time="2026-01-17T00:27:28.306683860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-116,Uid:7456f0c02b726c630501cde9a587bd4b,Namespace:kube-system,Attempt:0,}" Jan 17 00:27:28.314549 containerd[1969]: time="2026-01-17T00:27:28.314502325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-116,Uid:d33356b2198d856381d5cd0a9caec7f8,Namespace:kube-system,Attempt:0,}" Jan 17 00:27:28.319935 containerd[1969]: time="2026-01-17T00:27:28.319896648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-116,Uid:2239e534f67220d168b3bd33efc6e8f1,Namespace:kube-system,Attempt:0,}" Jan 17 00:27:28.436064 kubelet[2797]: E0117 00:27:28.436023 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-116?timeout=10s\": dial tcp 172.31.25.116:6443: connect: connection refused" interval="800ms" Jan 17 00:27:28.617069 kubelet[2797]: I0117 00:27:28.616969 2797 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-116" Jan 17 00:27:28.617302 kubelet[2797]: E0117 00:27:28.617258 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.116:6443/api/v1/nodes\": dial tcp 172.31.25.116:6443: connect: connection refused" node="ip-172-31-25-116" Jan 17 00:27:28.822301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956831515.mount: Deactivated successfully. 
Jan 17 00:27:28.838080 containerd[1969]: time="2026-01-17T00:27:28.838018273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:27:28.840624 containerd[1969]: time="2026-01-17T00:27:28.840567749Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:27:28.842221 containerd[1969]: time="2026-01-17T00:27:28.842132501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:27:28.844388 containerd[1969]: time="2026-01-17T00:27:28.844339018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:27:28.846271 containerd[1969]: time="2026-01-17T00:27:28.846230460Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:27:28.848608 containerd[1969]: time="2026-01-17T00:27:28.848550219Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:27:28.850370 containerd[1969]: time="2026-01-17T00:27:28.850304310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:27:28.852885 containerd[1969]: time="2026-01-17T00:27:28.852836717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:27:28.853688 containerd[1969]: time="2026-01-17T00:27:28.853493007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.713996ms" Jan 17 00:27:28.856783 containerd[1969]: time="2026-01-17T00:27:28.855605602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.031972ms" Jan 17 00:27:28.856783 containerd[1969]: time="2026-01-17T00:27:28.856724168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.75707ms" Jan 17 00:27:28.887551 kubelet[2797]: E0117 00:27:28.887251 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.25.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:27:29.099638 kubelet[2797]: E0117 00:27:29.099594 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.25.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-116&limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:27:29.121545 containerd[1969]: time="2026-01-17T00:27:29.121249382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:29.121545 containerd[1969]: time="2026-01-17T00:27:29.121333018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:29.121545 containerd[1969]: time="2026-01-17T00:27:29.121353751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:29.121545 containerd[1969]: time="2026-01-17T00:27:29.121448568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:29.122457 containerd[1969]: time="2026-01-17T00:27:29.122386735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:29.122457 containerd[1969]: time="2026-01-17T00:27:29.122433080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:29.122790 containerd[1969]: time="2026-01-17T00:27:29.122623762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:29.122872 containerd[1969]: time="2026-01-17T00:27:29.122772332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:29.125275 containerd[1969]: time="2026-01-17T00:27:29.125210236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:27:29.125441 containerd[1969]: time="2026-01-17T00:27:29.125409124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:27:29.125570 containerd[1969]: time="2026-01-17T00:27:29.125515740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:29.125707 containerd[1969]: time="2026-01-17T00:27:29.125670463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:27:29.153861 systemd[1]: Started cri-containerd-3888bb8da73d343da8ea403ccdb8e49e8bf50437fbf2ea9670859b094484fe13.scope - libcontainer container 3888bb8da73d343da8ea403ccdb8e49e8bf50437fbf2ea9670859b094484fe13. Jan 17 00:27:29.165108 systemd[1]: Started cri-containerd-10e17292ba6966b858c637701869e40d50bab4bc2b22d28f4084b892019c0efb.scope - libcontainer container 10e17292ba6966b858c637701869e40d50bab4bc2b22d28f4084b892019c0efb. 
Jan 17 00:27:29.167420 systemd[1]: Started cri-containerd-1f808aa0bea3826d91dc28f4565c816804e9dac54ada39dede12bf7306ae62e9.scope - libcontainer container 1f808aa0bea3826d91dc28f4565c816804e9dac54ada39dede12bf7306ae62e9. Jan 17 00:27:29.218423 kubelet[2797]: E0117 00:27:29.218312 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.25.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:27:29.238158 kubelet[2797]: E0117 00:27:29.237498 2797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-116?timeout=10s\": dial tcp 172.31.25.116:6443: connect: connection refused" interval="1.6s" Jan 17 00:27:29.250926 containerd[1969]: time="2026-01-17T00:27:29.250592144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-116,Uid:d33356b2198d856381d5cd0a9caec7f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f808aa0bea3826d91dc28f4565c816804e9dac54ada39dede12bf7306ae62e9\"" Jan 17 00:27:29.265476 containerd[1969]: time="2026-01-17T00:27:29.263872897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-116,Uid:7456f0c02b726c630501cde9a587bd4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3888bb8da73d343da8ea403ccdb8e49e8bf50437fbf2ea9670859b094484fe13\"" Jan 17 00:27:29.271880 containerd[1969]: time="2026-01-17T00:27:29.271828268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-116,Uid:2239e534f67220d168b3bd33efc6e8f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"10e17292ba6966b858c637701869e40d50bab4bc2b22d28f4084b892019c0efb\"" Jan 17 00:27:29.275420 containerd[1969]: time="2026-01-17T00:27:29.275168726Z" level=info msg="CreateContainer within sandbox \"3888bb8da73d343da8ea403ccdb8e49e8bf50437fbf2ea9670859b094484fe13\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:27:29.278636 containerd[1969]: time="2026-01-17T00:27:29.278580480Z" level=info msg="CreateContainer within sandbox \"1f808aa0bea3826d91dc28f4565c816804e9dac54ada39dede12bf7306ae62e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:27:29.280511 kubelet[2797]: E0117 00:27:29.280451 2797 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.25.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:27:29.296326 containerd[1969]: time="2026-01-17T00:27:29.296280670Z" level=info msg="CreateContainer within sandbox \"10e17292ba6966b858c637701869e40d50bab4bc2b22d28f4084b892019c0efb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:27:29.319956 containerd[1969]: time="2026-01-17T00:27:29.319910355Z" level=info msg="CreateContainer within sandbox \"3888bb8da73d343da8ea403ccdb8e49e8bf50437fbf2ea9670859b094484fe13\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0460ff09c9953160f858a60246c30f39113100f2dcc0641f0e7abb4a7e6f1f5b\"" Jan 17 00:27:29.320777 containerd[1969]: 
time="2026-01-17T00:27:29.320682655Z" level=info msg="StartContainer for \"0460ff09c9953160f858a60246c30f39113100f2dcc0641f0e7abb4a7e6f1f5b\"" Jan 17 00:27:29.344404 containerd[1969]: time="2026-01-17T00:27:29.344350390Z" level=info msg="CreateContainer within sandbox \"1f808aa0bea3826d91dc28f4565c816804e9dac54ada39dede12bf7306ae62e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31\"" Jan 17 00:27:29.345503 containerd[1969]: time="2026-01-17T00:27:29.345449717Z" level=info msg="StartContainer for \"798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31\"" Jan 17 00:27:29.352579 containerd[1969]: time="2026-01-17T00:27:29.351117592Z" level=info msg="CreateContainer within sandbox \"10e17292ba6966b858c637701869e40d50bab4bc2b22d28f4084b892019c0efb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df\"" Jan 17 00:27:29.353191 containerd[1969]: time="2026-01-17T00:27:29.353090860Z" level=info msg="StartContainer for \"44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df\"" Jan 17 00:27:29.359896 systemd[1]: Started cri-containerd-0460ff09c9953160f858a60246c30f39113100f2dcc0641f0e7abb4a7e6f1f5b.scope - libcontainer container 0460ff09c9953160f858a60246c30f39113100f2dcc0641f0e7abb4a7e6f1f5b. Jan 17 00:27:29.408977 systemd[1]: Started cri-containerd-44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df.scope - libcontainer container 44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df. Jan 17 00:27:29.412604 systemd[1]: Started cri-containerd-798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31.scope - libcontainer container 798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31. 
Jan 17 00:27:29.420914 kubelet[2797]: I0117 00:27:29.420426 2797 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-116" Jan 17 00:27:29.420914 kubelet[2797]: E0117 00:27:29.420844 2797 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.25.116:6443/api/v1/nodes\": dial tcp 172.31.25.116:6443: connect: connection refused" node="ip-172-31-25-116" Jan 17 00:27:29.455560 containerd[1969]: time="2026-01-17T00:27:29.455442376Z" level=info msg="StartContainer for \"0460ff09c9953160f858a60246c30f39113100f2dcc0641f0e7abb4a7e6f1f5b\" returns successfully" Jan 17 00:27:29.493987 containerd[1969]: time="2026-01-17T00:27:29.493858631Z" level=info msg="StartContainer for \"798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31\" returns successfully" Jan 17 00:27:29.524345 containerd[1969]: time="2026-01-17T00:27:29.524300355Z" level=info msg="StartContainer for \"44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df\" returns successfully" Jan 17 00:27:29.887783 kubelet[2797]: E0117 00:27:29.887206 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:29.890328 kubelet[2797]: E0117 00:27:29.890298 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:29.895633 kubelet[2797]: E0117 00:27:29.895601 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:29.924654 kubelet[2797]: E0117 00:27:29.924480 2797 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.25.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.116:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:27:30.897320 kubelet[2797]: E0117 00:27:30.897282 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:30.897775 kubelet[2797]: E0117 00:27:30.897690 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:31.024561 kubelet[2797]: I0117 00:27:31.024530 2797 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-116" Jan 17 00:27:31.901285 kubelet[2797]: E0117 00:27:31.900352 2797 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:33.077442 kubelet[2797]: E0117 00:27:33.077393 2797 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-116\" not found" node="ip-172-31-25-116" Jan 17 00:27:33.126332 kubelet[2797]: I0117 00:27:33.126101 2797 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-116" Jan 17 00:27:33.128539 kubelet[2797]: I0117 00:27:33.128517 2797 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-116" Jan 17 00:27:33.188401 kubelet[2797]: 
E0117 00:27:33.187890 2797 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-116\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-25-116" Jan 17 00:27:33.188401 kubelet[2797]: I0117 00:27:33.188329 2797 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-116" Jan 17 00:27:33.192914 kubelet[2797]: E0117 00:27:33.192570 2797 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-116\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-25-116" Jan 17 00:27:33.192914 kubelet[2797]: I0117 00:27:33.192604 2797 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:33.200195 kubelet[2797]: E0117 00:27:33.200122 2797 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-116\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:33.210244 kubelet[2797]: I0117 00:27:33.209536 2797 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:33.215346 kubelet[2797]: E0117 00:27:33.215307 2797 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-25-116\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-116" Jan 17 00:27:33.791933 kubelet[2797]: I0117 00:27:33.791880 2797 apiserver.go:52] "Watching apiserver" Jan 17 00:27:33.827869 kubelet[2797]: I0117 00:27:33.827797 2797 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:27:35.092576 systemd[1]: Reloading requested from client PID 3082 ('systemctl') (unit session-7.scope)... Jan 17 00:27:35.092595 systemd[1]: Reloading... Jan 17 00:27:35.202777 zram_generator::config[3120]: No configuration found. Jan 17 00:27:35.343783 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:27:35.446548 systemd[1]: Reloading finished in 353 ms. Jan 17 00:27:35.497724 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:35.507704 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:27:35.507966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:35.508024 systemd[1]: kubelet.service: Consumed 1.101s CPU time, 128.7M memory peak, 0B memory swap peak. Jan 17 00:27:35.518675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:35.949742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:35.961578 (kubelet)[3181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:27:36.059961 kubelet[3181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
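The "no PriorityClass with name system-node-critical" failures above are transient: system-node-critical is a built-in PriorityClass that the API server creates while bootstrapping, so mirror-pod creation is rejected only until that completes. A sketch for confirming the class once the control plane answers, assuming the official kubernetes Python client:

    from kubernetes import client, config

    config.load_kube_config()
    for pc in client.SchedulingV1Api().list_priority_class().items:
        print(pc.metadata.name, pc.value)   # expect system-node-critical among them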
Jan 17 00:27:36.059961 kubelet[3181]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:27:36.059961 kubelet[3181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:27:36.061517 kubelet[3181]: I0117 00:27:36.061242 3181 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:27:36.080680 kubelet[3181]: I0117 00:27:36.080633 3181 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:27:36.080680 kubelet[3181]: I0117 00:27:36.080663 3181 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:27:36.081052 kubelet[3181]: I0117 00:27:36.081026 3181 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:27:36.083575 kubelet[3181]: I0117 00:27:36.083536 3181 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:27:36.101772 kubelet[3181]: I0117 00:27:36.097656 3181 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:27:36.157853 kubelet[3181]: E0117 00:27:36.157690 3181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:27:36.157853 kubelet[3181]: I0117 00:27:36.157729 3181 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:27:36.169983 kubelet[3181]: I0117 00:27:36.168502 3181 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:27:36.169983 kubelet[3181]: I0117 00:27:36.168866 3181 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:27:36.169983 kubelet[3181]: I0117 00:27:36.168910 3181 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-116","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:27:36.169983 kubelet[3181]: I0117 00:27:36.169252 3181 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:27:36.170333 kubelet[3181]: I0117 00:27:36.169269 3181 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:27:36.170333 kubelet[3181]: I0117 00:27:36.169333 3181 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:27:36.170333 kubelet[3181]: I0117 00:27:36.169517 3181 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:27:36.170333 kubelet[3181]: I0117 00:27:36.169536 3181 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:27:36.170333 kubelet[3181]: I0117 00:27:36.169565 3181 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:27:36.170333 kubelet[3181]: I0117 00:27:36.169585 3181 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:27:36.191594 kubelet[3181]: I0117 00:27:36.190663 3181 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:27:36.191594 kubelet[3181]: I0117 00:27:36.191597 3181 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:27:36.207847 kubelet[3181]: I0117 00:27:36.205954 3181 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:27:36.207847 kubelet[3181]: I0117 00:27:36.206734 3181 server.go:1289] "Started kubelet" Jan 17 00:27:36.210715 kubelet[3181]: I0117 00:27:36.210666 3181 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:27:36.212946 kubelet[3181]: I0117 
00:27:36.212880 3181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:27:36.215608 kubelet[3181]: I0117 00:27:36.215582 3181 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:27:36.221380 kubelet[3181]: I0117 00:27:36.221355 3181 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:27:36.233099 kubelet[3181]: I0117 00:27:36.232399 3181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:27:36.233439 kubelet[3181]: I0117 00:27:36.233417 3181 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:27:36.234781 kubelet[3181]: I0117 00:27:36.233541 3181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:27:36.235636 kubelet[3181]: I0117 00:27:36.235609 3181 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:27:36.236411 kubelet[3181]: I0117 00:27:36.235796 3181 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:27:36.243997 kubelet[3181]: I0117 00:27:36.243976 3181 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:27:36.244578 kubelet[3181]: I0117 00:27:36.244546 3181 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:27:36.255474 kubelet[3181]: E0117 00:27:36.250824 3181 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:27:36.256686 kubelet[3181]: I0117 00:27:36.256634 3181 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:27:36.263948 kubelet[3181]: I0117 00:27:36.263903 3181 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:27:36.266419 kubelet[3181]: I0117 00:27:36.266377 3181 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:27:36.266419 kubelet[3181]: I0117 00:27:36.266410 3181 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:27:36.266596 kubelet[3181]: I0117 00:27:36.266436 3181 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
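The crio factory registration failure above is expected on this host: cAdvisor probes each runtime socket, and /var/run/crio/crio.sock is absent because the node runs containerd. The equivalent existence check, with the crio path from the log and the containerd default path as an assumption:

    import os

    for sock in ("/var/run/crio/crio.sock",            # absent here, per the log
                 "/run/containerd/containerd.sock"):   # containerd default (assumed)
        print(sock, "present" if os.path.exists(sock) else "absent")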
Jan 17 00:27:36.266596 kubelet[3181]: I0117 00:27:36.266445 3181 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 17 00:27:36.266596 kubelet[3181]: E0117 00:27:36.266497 3181 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:27:36.336976 kubelet[3181]: I0117 00:27:36.336900 3181 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:27:36.337237 kubelet[3181]: I0117 00:27:36.337222 3181 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:27:36.337330 kubelet[3181]: I0117 00:27:36.337322 3181 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:27:36.337510 kubelet[3181]: I0117 00:27:36.337499 3181 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 00:27:36.337572 kubelet[3181]: I0117 00:27:36.337556 3181 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 00:27:36.337610 kubelet[3181]: I0117 00:27:36.337605 3181 policy_none.go:49] "None policy: Start"
Jan 17 00:27:36.337663 kubelet[3181]: I0117 00:27:36.337657 3181 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 17 00:27:36.337703 kubelet[3181]: I0117 00:27:36.337698 3181 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 00:27:36.337861 kubelet[3181]: I0117 00:27:36.337852 3181 state_mem.go:75] "Updated machine memory state"
Jan 17 00:27:36.344471 kubelet[3181]: E0117 00:27:36.344446 3181 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 17 00:27:36.344879 kubelet[3181]: I0117 00:27:36.344817 3181 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:27:36.345203 kubelet[3181]: I0117 00:27:36.345160 3181 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:27:36.345978 kubelet[3181]: I0117 00:27:36.345503 3181 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:27:36.349421 kubelet[3181]: E0117 00:27:36.348689 3181 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:27:36.368371 kubelet[3181]: I0117 00:27:36.368331 3181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-116"
Jan 17 00:27:36.370163 kubelet[3181]: I0117 00:27:36.370133 3181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-116"
Jan 17 00:27:36.370393 kubelet[3181]: I0117 00:27:36.370350 3181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-25-116"
Jan 17 00:27:36.460339 kubelet[3181]: I0117 00:27:36.460105 3181 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-25-116"
Jan 17 00:27:36.474253 kubelet[3181]: I0117 00:27:36.472583 3181 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-25-116"
Jan 17 00:27:36.474253 kubelet[3181]: I0117 00:27:36.472682 3181 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-25-116"
Jan 17 00:27:36.537188 kubelet[3181]: I0117 00:27:36.536936 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116"
Jan 17 00:27:36.537188 kubelet[3181]: I0117 00:27:36.536974 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116"
Jan 17 00:27:36.537188 kubelet[3181]: I0117 00:27:36.536997 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116"
Jan 17 00:27:36.537188 kubelet[3181]: I0117 00:27:36.537016 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2239e534f67220d168b3bd33efc6e8f1-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-116\" (UID: \"2239e534f67220d168b3bd33efc6e8f1\") " pod="kube-system/kube-scheduler-ip-172-31-25-116"
Jan 17 00:27:36.537188 kubelet[3181]: I0117 00:27:36.537053 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116"
Jan 17 00:27:36.537431 kubelet[3181]: I0117 00:27:36.537073 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7456f0c02b726c630501cde9a587bd4b-ca-certs\") pod \"kube-apiserver-ip-172-31-25-116\" (UID: \"7456f0c02b726c630501cde9a587bd4b\") " pod="kube-system/kube-apiserver-ip-172-31-25-116"
Jan 17 00:27:36.537431 kubelet[3181]: I0117 00:27:36.537088 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7456f0c02b726c630501cde9a587bd4b-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-116\" (UID: \"7456f0c02b726c630501cde9a587bd4b\") " pod="kube-system/kube-apiserver-ip-172-31-25-116"
Jan 17 00:27:36.537431 kubelet[3181]: I0117 00:27:36.537103 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7456f0c02b726c630501cde9a587bd4b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-116\" (UID: \"7456f0c02b726c630501cde9a587bd4b\") " pod="kube-system/kube-apiserver-ip-172-31-25-116"
Jan 17 00:27:36.537431 kubelet[3181]: I0117 00:27:36.537119 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d33356b2198d856381d5cd0a9caec7f8-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-116\" (UID: \"d33356b2198d856381d5cd0a9caec7f8\") " pod="kube-system/kube-controller-manager-ip-172-31-25-116"
Jan 17 00:27:37.175711 kubelet[3181]: I0117 00:27:37.175647 3181 apiserver.go:52] "Watching apiserver"
Jan 17 00:27:37.236423 kubelet[3181]: I0117 00:27:37.236362 3181 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 17 00:27:37.307426 kubelet[3181]: I0117 00:27:37.307221 3181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-25-116"
Jan 17 00:27:37.308032 kubelet[3181]: I0117 00:27:37.307996 3181 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-25-116"
Jan 17 00:27:37.318444 kubelet[3181]: E0117 00:27:37.318212 3181 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-25-116\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-116"
Jan 17 00:27:37.319578 kubelet[3181]: E0117 00:27:37.319387 3181 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-25-116\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-116"
Jan 17 00:27:37.339431 kubelet[3181]: I0117 00:27:37.339222 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-116" podStartSLOduration=1.339191371 podStartE2EDuration="1.339191371s" podCreationTimestamp="2026-01-17 00:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:27:37.339028598 +0000 UTC m=+1.360919464" watchObservedRunningTime="2026-01-17 00:27:37.339191371 +0000 UTC m=+1.361082215"
Jan 17 00:27:37.363875 kubelet[3181]: I0117 00:27:37.363657 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-116" podStartSLOduration=1.363640147 podStartE2EDuration="1.363640147s" podCreationTimestamp="2026-01-17 00:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:27:37.350490748 +0000 UTC m=+1.372381612" watchObservedRunningTime="2026-01-17 00:27:37.363640147 +0000 UTC m=+1.385531014"
Jan 17 00:27:40.124993 update_engine[1956]: I20260117 00:27:40.124923 1956 update_attempter.cc:509] Updating boot flags...
Jan 17 00:27:40.179832 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3243)
Jan 17 00:27:40.388783 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3246)
Jan 17 00:27:40.736789 kubelet[3181]: I0117 00:27:40.736731 3181 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 00:27:40.737203 containerd[1969]: time="2026-01-17T00:27:40.737079812Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:27:40.737425 kubelet[3181]: I0117 00:27:40.737219 3181 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 00:27:41.622951 kubelet[3181]: I0117 00:27:41.622900 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-116" podStartSLOduration=5.622883129 podStartE2EDuration="5.622883129s" podCreationTimestamp="2026-01-17 00:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:27:37.363765376 +0000 UTC m=+1.385656221" watchObservedRunningTime="2026-01-17 00:27:41.622883129 +0000 UTC m=+5.644773973"
Jan 17 00:27:41.645909 systemd[1]: Created slice kubepods-besteffort-podc2eb4424_24b5_425e_b32c_bac580d8915d.slice - libcontainer container kubepods-besteffort-podc2eb4424_24b5_425e_b32c_bac580d8915d.slice.
Jan 17 00:27:41.677141 kubelet[3181]: I0117 00:27:41.677099 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2eb4424-24b5-425e-b32c-bac580d8915d-xtables-lock\") pod \"kube-proxy-gvbkp\" (UID: \"c2eb4424-24b5-425e-b32c-bac580d8915d\") " pod="kube-system/kube-proxy-gvbkp"
Jan 17 00:27:41.677306 kubelet[3181]: I0117 00:27:41.677149 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2eb4424-24b5-425e-b32c-bac580d8915d-lib-modules\") pod \"kube-proxy-gvbkp\" (UID: \"c2eb4424-24b5-425e-b32c-bac580d8915d\") " pod="kube-system/kube-proxy-gvbkp"
Jan 17 00:27:41.677306 kubelet[3181]: I0117 00:27:41.677176 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dd2t\" (UniqueName: \"kubernetes.io/projected/c2eb4424-24b5-425e-b32c-bac580d8915d-kube-api-access-4dd2t\") pod \"kube-proxy-gvbkp\" (UID: \"c2eb4424-24b5-425e-b32c-bac580d8915d\") " pod="kube-system/kube-proxy-gvbkp"
Jan 17 00:27:41.677306 kubelet[3181]: I0117 00:27:41.677208 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2eb4424-24b5-425e-b32c-bac580d8915d-kube-proxy\") pod \"kube-proxy-gvbkp\" (UID: \"c2eb4424-24b5-425e-b32c-bac580d8915d\") " pod="kube-system/kube-proxy-gvbkp"
Jan 17 00:27:41.952863 systemd[1]: Created slice kubepods-besteffort-podb189d04e_c012_4cb4_a30f_abd65ad43060.slice - libcontainer container kubepods-besteffort-podb189d04e_c012_4cb4_a30f_abd65ad43060.slice.
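[editor's note: the two kubelet lines above (kuberuntime_manager.go and kubelet_network.go) record the node's PodCIDR being handed to the container runtime over CRI. As a rough illustration, not part of this log, the same call could be issued directly against the runtime with the k8s.io/cri-api Go bindings roughly as below; the CIDR comes from the log and the socket path is the usual containerd default, everything else is assumed.]

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the containerd CRI socket (the same runtime the kubelet above talks to).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Equivalent of "Updating runtime config through cri with podcidr":
	// push the node's PodCIDR down to the runtime's network config.
	_, err = client.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```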
Jan 17 00:27:41.964975 containerd[1969]: time="2026-01-17T00:27:41.964935280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvbkp,Uid:c2eb4424-24b5-425e-b32c-bac580d8915d,Namespace:kube-system,Attempt:0,}"
Jan 17 00:27:41.980794 kubelet[3181]: I0117 00:27:41.979815 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84db\" (UniqueName: \"kubernetes.io/projected/b189d04e-c012-4cb4-a30f-abd65ad43060-kube-api-access-r84db\") pod \"tigera-operator-7dcd859c48-8bcgz\" (UID: \"b189d04e-c012-4cb4-a30f-abd65ad43060\") " pod="tigera-operator/tigera-operator-7dcd859c48-8bcgz"
Jan 17 00:27:41.980794 kubelet[3181]: I0117 00:27:41.979858 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b189d04e-c012-4cb4-a30f-abd65ad43060-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8bcgz\" (UID: \"b189d04e-c012-4cb4-a30f-abd65ad43060\") " pod="tigera-operator/tigera-operator-7dcd859c48-8bcgz"
Jan 17 00:27:41.993962 containerd[1969]: time="2026-01-17T00:27:41.993537249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:27:41.993962 containerd[1969]: time="2026-01-17T00:27:41.993606949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:27:41.993962 containerd[1969]: time="2026-01-17T00:27:41.993621344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:41.993962 containerd[1969]: time="2026-01-17T00:27:41.993710375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:42.028001 systemd[1]: Started cri-containerd-241486f60ec89b0f067818ca6449244f207c6bc7e64374d135405f320fb9697b.scope - libcontainer container 241486f60ec89b0f067818ca6449244f207c6bc7e64374d135405f320fb9697b.
Jan 17 00:27:42.052685 containerd[1969]: time="2026-01-17T00:27:42.052652018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvbkp,Uid:c2eb4424-24b5-425e-b32c-bac580d8915d,Namespace:kube-system,Attempt:0,} returns sandbox id \"241486f60ec89b0f067818ca6449244f207c6bc7e64374d135405f320fb9697b\""
Jan 17 00:27:42.059950 containerd[1969]: time="2026-01-17T00:27:42.059742878Z" level=info msg="CreateContainer within sandbox \"241486f60ec89b0f067818ca6449244f207c6bc7e64374d135405f320fb9697b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:27:42.083570 containerd[1969]: time="2026-01-17T00:27:42.083532711Z" level=info msg="CreateContainer within sandbox \"241486f60ec89b0f067818ca6449244f207c6bc7e64374d135405f320fb9697b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c72d2512aadc85d7694a7f030ae01a0a30871f59ba12fdf6a5bb9d402c14cecc\""
Jan 17 00:27:42.088126 containerd[1969]: time="2026-01-17T00:27:42.086986684Z" level=info msg="StartContainer for \"c72d2512aadc85d7694a7f030ae01a0a30871f59ba12fdf6a5bb9d402c14cecc\""
Jan 17 00:27:42.121973 systemd[1]: Started cri-containerd-c72d2512aadc85d7694a7f030ae01a0a30871f59ba12fdf6a5bb9d402c14cecc.scope - libcontainer container c72d2512aadc85d7694a7f030ae01a0a30871f59ba12fdf6a5bb9d402c14cecc.
Jan 17 00:27:42.152838 containerd[1969]: time="2026-01-17T00:27:42.152721137Z" level=info msg="StartContainer for \"c72d2512aadc85d7694a7f030ae01a0a30871f59ba12fdf6a5bb9d402c14cecc\" returns successfully"
Jan 17 00:27:42.257013 containerd[1969]: time="2026-01-17T00:27:42.256903471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8bcgz,Uid:b189d04e-c012-4cb4-a30f-abd65ad43060,Namespace:tigera-operator,Attempt:0,}"
Jan 17 00:27:42.283459 containerd[1969]: time="2026-01-17T00:27:42.282634584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:27:42.283459 containerd[1969]: time="2026-01-17T00:27:42.283411352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:27:42.283459 containerd[1969]: time="2026-01-17T00:27:42.283426880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:42.283738 containerd[1969]: time="2026-01-17T00:27:42.283523386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:42.305959 systemd[1]: Started cri-containerd-f492c753e99e0584be23d6124a15d5c874e795f82ce2e9a82cb0cebbc07744fb.scope - libcontainer container f492c753e99e0584be23d6124a15d5c874e795f82ce2e9a82cb0cebbc07744fb.
Jan 17 00:27:42.371853 containerd[1969]: time="2026-01-17T00:27:42.371624408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8bcgz,Uid:b189d04e-c012-4cb4-a30f-abd65ad43060,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f492c753e99e0584be23d6124a15d5c874e795f82ce2e9a82cb0cebbc07744fb\""
Jan 17 00:27:42.374521 containerd[1969]: time="2026-01-17T00:27:42.374418631Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 17 00:27:42.793340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249669822.mount: Deactivated successfully.
Jan 17 00:27:43.599968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778593731.mount: Deactivated successfully.
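[editor's note: the containerd lines above trace the standard CRI pod-start sequence: RunPodSandbox, then CreateContainer inside the returned sandbox, then StartContainer. A minimal sketch of the same three calls follows, reusing the client and ctx (and the runtimeapi import) from the previous sketch; the pod name and UID are copied from the log, while the image tag and everything else are illustrative assumptions.]

```go
// startPod sketches the RunPodSandbox -> CreateContainer -> StartContainer
// sequence visible in the containerd log lines above.
func startPod(ctx context.Context, client runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-gvbkp",
			Namespace: "kube-system",
			Uid:       "c2eb4424-24b5-425e-b32c-bac580d8915d",
		},
	}
	// "RunPodSandbox ... returns sandbox id ..."
	sb, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}
	// "CreateContainer within sandbox ... returns container id ..."
	ctr, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Hypothetical image tag; the actual tag is not shown in this log.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	// "StartContainer for ..."
	_, err = client.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```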
Jan 17 00:27:44.401344 containerd[1969]: time="2026-01-17T00:27:44.401291628Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:27:44.402336 containerd[1969]: time="2026-01-17T00:27:44.402191594Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 17 00:27:44.404489 containerd[1969]: time="2026-01-17T00:27:44.403414993Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:27:44.406249 containerd[1969]: time="2026-01-17T00:27:44.405495713Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:27:44.406249 containerd[1969]: time="2026-01-17T00:27:44.406130899Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.031669741s"
Jan 17 00:27:44.406249 containerd[1969]: time="2026-01-17T00:27:44.406158957Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 17 00:27:44.415511 containerd[1969]: time="2026-01-17T00:27:44.414860134Z" level=info msg="CreateContainer within sandbox \"f492c753e99e0584be23d6124a15d5c874e795f82ce2e9a82cb0cebbc07744fb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 00:27:44.428955 containerd[1969]: time="2026-01-17T00:27:44.428921790Z" level=info msg="CreateContainer within sandbox \"f492c753e99e0584be23d6124a15d5c874e795f82ce2e9a82cb0cebbc07744fb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635\""
Jan 17 00:27:44.429565 containerd[1969]: time="2026-01-17T00:27:44.429516111Z" level=info msg="StartContainer for \"4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635\""
Jan 17 00:27:44.459172 systemd[1]: run-containerd-runc-k8s.io-4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635-runc.EqkxNL.mount: Deactivated successfully.
Jan 17 00:27:44.464964 systemd[1]: Started cri-containerd-4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635.scope - libcontainer container 4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635.
Jan 17 00:27:44.493535 containerd[1969]: time="2026-01-17T00:27:44.493493153Z" level=info msg="StartContainer for \"4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635\" returns successfully"
Jan 17 00:27:45.368117 kubelet[3181]: I0117 00:27:45.368046 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvbkp" podStartSLOduration=4.3680322799999995 podStartE2EDuration="4.36803228s" podCreationTimestamp="2026-01-17 00:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:27:42.36108328 +0000 UTC m=+6.382974145" watchObservedRunningTime="2026-01-17 00:27:45.36803228 +0000 UTC m=+9.389923143"
Jan 17 00:27:47.809484 kubelet[3181]: I0117 00:27:47.809282 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8bcgz" podStartSLOduration=4.775515013 podStartE2EDuration="6.809259625s" podCreationTimestamp="2026-01-17 00:27:41 +0000 UTC" firstStartedPulling="2026-01-17 00:27:42.373365913 +0000 UTC m=+6.395256755" lastFinishedPulling="2026-01-17 00:27:44.407110511 +0000 UTC m=+8.429001367" observedRunningTime="2026-01-17 00:27:45.367792118 +0000 UTC m=+9.389682981" watchObservedRunningTime="2026-01-17 00:27:47.809259625 +0000 UTC m=+11.831150490"
Jan 17 00:27:51.685466 sudo[2295]: pam_unix(sudo:session): session closed for user root
Jan 17 00:27:51.770540 sshd[2292]: pam_unix(sshd:session): session closed for user core
Jan 17 00:27:51.775215 systemd[1]: sshd@6-172.31.25.116:22-4.153.228.146:35012.service: Deactivated successfully.
Jan 17 00:27:51.779502 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:27:51.780276 systemd[1]: session-7.scope: Consumed 6.310s CPU time, 144.9M memory peak, 0B memory swap peak.
Jan 17 00:27:51.782819 systemd-logind[1955]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:27:51.785810 systemd-logind[1955]: Removed session 7.
Jan 17 00:27:57.542820 systemd[1]: Created slice kubepods-besteffort-podd7b5f350_e6bc_49a7_ae37_a57140148d61.slice - libcontainer container kubepods-besteffort-podd7b5f350_e6bc_49a7_ae37_a57140148d61.slice.
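[editor's note on the latency lines above: for tigera-operator, podStartSLOduration appears to be the end-to-end duration minus the image-pull window measured on the kubelet's monotonic clock, since 6.809259625 s - (8.429001367 s - 6.395256755 s) = 6.809259625 s - 2.033744612 s = 4.775515013 s, exactly the value logged. Pods that never pulled an image (firstStartedPulling at the zero time "0001-01-01 00:00:00") report podStartSLOduration equal to podStartE2EDuration, as the kube-proxy line shows.]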
Jan 17 00:27:57.587062 kubelet[3181]: I0117 00:27:57.586998 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhlp\" (UniqueName: \"kubernetes.io/projected/d7b5f350-e6bc-49a7-ae37-a57140148d61-kube-api-access-tfhlp\") pod \"calico-typha-68786dcb58-br42g\" (UID: \"d7b5f350-e6bc-49a7-ae37-a57140148d61\") " pod="calico-system/calico-typha-68786dcb58-br42g"
Jan 17 00:27:57.588772 kubelet[3181]: I0117 00:27:57.587110 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d7b5f350-e6bc-49a7-ae37-a57140148d61-typha-certs\") pod \"calico-typha-68786dcb58-br42g\" (UID: \"d7b5f350-e6bc-49a7-ae37-a57140148d61\") " pod="calico-system/calico-typha-68786dcb58-br42g"
Jan 17 00:27:57.588772 kubelet[3181]: I0117 00:27:57.587137 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7b5f350-e6bc-49a7-ae37-a57140148d61-tigera-ca-bundle\") pod \"calico-typha-68786dcb58-br42g\" (UID: \"d7b5f350-e6bc-49a7-ae37-a57140148d61\") " pod="calico-system/calico-typha-68786dcb58-br42g"
Jan 17 00:27:57.663399 systemd[1]: Created slice kubepods-besteffort-pod48b8a5bb_65d2_4d0d_9906_9a967381ae35.slice - libcontainer container kubepods-besteffort-pod48b8a5bb_65d2_4d0d_9906_9a967381ae35.slice.
Jan 17 00:27:57.688985 kubelet[3181]: I0117 00:27:57.688941 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48b8a5bb-65d2-4d0d-9906-9a967381ae35-tigera-ca-bundle\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.688985 kubelet[3181]: I0117 00:27:57.688993 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-var-run-calico\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689208 kubelet[3181]: I0117 00:27:57.689017 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-lib-modules\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689208 kubelet[3181]: I0117 00:27:57.689038 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/48b8a5bb-65d2-4d0d-9906-9a967381ae35-node-certs\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689208 kubelet[3181]: I0117 00:27:57.689075 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-policysync\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689208 kubelet[3181]: I0117 00:27:57.689104 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-xtables-lock\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689208 kubelet[3181]: I0117 00:27:57.689127 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrz8\" (UniqueName: \"kubernetes.io/projected/48b8a5bb-65d2-4d0d-9906-9a967381ae35-kube-api-access-xdrz8\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689422 kubelet[3181]: I0117 00:27:57.689152 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-cni-bin-dir\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689422 kubelet[3181]: I0117 00:27:57.689174 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-cni-net-dir\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689422 kubelet[3181]: I0117 00:27:57.689197 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-flexvol-driver-host\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689422 kubelet[3181]: I0117 00:27:57.689244 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-cni-log-dir\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.689422 kubelet[3181]: I0117 00:27:57.689284 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/48b8a5bb-65d2-4d0d-9906-9a967381ae35-var-lib-calico\") pod \"calico-node-khjqh\" (UID: \"48b8a5bb-65d2-4d0d-9906-9a967381ae35\") " pod="calico-system/calico-node-khjqh"
Jan 17 00:27:57.773828 kubelet[3181]: E0117 00:27:57.773604 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:27:57.790334 kubelet[3181]: I0117 00:27:57.790294 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9-registration-dir\") pod \"csi-node-driver-5p9mr\" (UID: \"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9\") " pod="calico-system/csi-node-driver-5p9mr"
Jan 17 00:27:57.790334 kubelet[3181]: I0117 00:27:57.790342 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sg4t\" (UniqueName: \"kubernetes.io/projected/c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9-kube-api-access-6sg4t\") pod \"csi-node-driver-5p9mr\" (UID: \"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9\") " pod="calico-system/csi-node-driver-5p9mr"
Jan 17 00:27:57.790555 kubelet[3181]: I0117 00:27:57.790393 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9-kubelet-dir\") pod \"csi-node-driver-5p9mr\" (UID: \"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9\") " pod="calico-system/csi-node-driver-5p9mr"
Jan 17 00:27:57.790555 kubelet[3181]: I0117 00:27:57.790419 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9-varrun\") pod \"csi-node-driver-5p9mr\" (UID: \"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9\") " pod="calico-system/csi-node-driver-5p9mr"
Jan 17 00:27:57.792329 kubelet[3181]: I0117 00:27:57.792287 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9-socket-dir\") pod \"csi-node-driver-5p9mr\" (UID: \"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9\") " pod="calico-system/csi-node-driver-5p9mr"
Jan 17 00:27:57.800350 kubelet[3181]: E0117 00:27:57.800216 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:27:57.800350 kubelet[3181]: W0117 00:27:57.800257 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:27:57.800350 kubelet[3181]: E0117 00:27:57.800286 3181 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[editor's note: the identical three-line FlexVolume probe failure above repeats ~14 more times between 00:27:57.801 and 00:27:57.831; the verbatim repeats are omitted]
Jan 17 00:27:57.854208 containerd[1969]: time="2026-01-17T00:27:57.853170606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68786dcb58-br42g,Uid:d7b5f350-e6bc-49a7-ae37-a57140148d61,Namespace:calico-system,Attempt:0,}"
[editor's note: another ~26 verbatim repeats of the same FlexVolume probe failure between 00:27:57.893 and 00:27:57.924 are omitted]
Jan 17 00:27:57.923580 containerd[1969]: time="2026-01-17T00:27:57.923047177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:27:57.923580 containerd[1969]: time="2026-01-17T00:27:57.923240774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:27:57.923580 containerd[1969]: time="2026-01-17T00:27:57.923258432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:57.924463 containerd[1969]: time="2026-01-17T00:27:57.924256185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:57.967890 containerd[1969]: time="2026-01-17T00:27:57.967511140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khjqh,Uid:48b8a5bb-65d2-4d0d-9906-9a967381ae35,Namespace:calico-system,Attempt:0,}"
Jan 17 00:27:57.984108 systemd[1]: Started cri-containerd-39ef95c51e0150bc440168752d8bdf8d4d4409e72636e51855be4fb4cab574a0.scope - libcontainer container 39ef95c51e0150bc440168752d8bdf8d4d4409e72636e51855be4fb4cab574a0.
Jan 17 00:27:58.026815 containerd[1969]: time="2026-01-17T00:27:58.026670026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:27:58.027106 containerd[1969]: time="2026-01-17T00:27:58.026999843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:27:58.027475 containerd[1969]: time="2026-01-17T00:27:58.027093119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:58.027726 containerd[1969]: time="2026-01-17T00:27:58.027580271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:27:58.058043 systemd[1]: Started cri-containerd-1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3.scope - libcontainer container 1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3.
Jan 17 00:27:58.076715 containerd[1969]: time="2026-01-17T00:27:58.076639799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-68786dcb58-br42g,Uid:d7b5f350-e6bc-49a7-ae37-a57140148d61,Namespace:calico-system,Attempt:0,} returns sandbox id \"39ef95c51e0150bc440168752d8bdf8d4d4409e72636e51855be4fb4cab574a0\""
Jan 17 00:27:58.083903 containerd[1969]: time="2026-01-17T00:27:58.083853934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:27:58.107649 containerd[1969]: time="2026-01-17T00:27:58.107491498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-khjqh,Uid:48b8a5bb-65d2-4d0d-9906-9a967381ae35,Namespace:calico-system,Attempt:0,} returns sandbox id \"1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3\""
Jan 17 00:27:59.272853 kubelet[3181]: E0117 00:27:59.271400 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:27:59.390014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915990923.mount: Deactivated successfully.
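[editor's note: the repeated driver-call.go/plugins.go failures above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol driver (note the flexvol-driver-host mount on calico-node-khjqh) has installed that binary: the exec fails, the driver's init output is empty, and unmarshalling "" as JSON fails. For context, and purely as an illustrative aside not taken from this log, a FlexVolume driver is just an executable that answers each command on stdout with a JSON status; a minimal sketch in Go, assuming no attach support is needed:]

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON a FlexVolume driver prints for each call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	b, _ := json.Marshal(s)
	fmt.Println(string(b))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty or missing reply here is exactly what produces the
		// "unexpected end of JSON input" errors in the log above.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Commands this sketch does not implement.
		reply(driverStatus{Status: "Not supported"})
	}
}
```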
Jan 17 00:28:00.269643 containerd[1969]: time="2026-01-17T00:28:00.269588253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:00.271248 containerd[1969]: time="2026-01-17T00:28:00.271106771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 00:28:00.274087 containerd[1969]: time="2026-01-17T00:28:00.273162647Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:00.276263 containerd[1969]: time="2026-01-17T00:28:00.276222699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:00.282092 containerd[1969]: time="2026-01-17T00:28:00.282039242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.198112896s"
Jan 17 00:28:00.282092 containerd[1969]: time="2026-01-17T00:28:00.282097069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 17 00:28:00.296068 containerd[1969]: time="2026-01-17T00:28:00.296027121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:28:00.333041 containerd[1969]: time="2026-01-17T00:28:00.332989337Z" level=info msg="CreateContainer within sandbox \"39ef95c51e0150bc440168752d8bdf8d4d4409e72636e51855be4fb4cab574a0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:28:00.354436 containerd[1969]: time="2026-01-17T00:28:00.354375535Z" level=info msg="CreateContainer within sandbox \"39ef95c51e0150bc440168752d8bdf8d4d4409e72636e51855be4fb4cab574a0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a052c596910cf08d62778597facf18ee3f3076f3439d760c33834ab6e38fe324\""
Jan 17 00:28:00.359476 containerd[1969]: time="2026-01-17T00:28:00.359117591Z" level=info msg="StartContainer for \"a052c596910cf08d62778597facf18ee3f3076f3439d760c33834ab6e38fe324\""
Jan 17 00:28:00.426103 systemd[1]: Started cri-containerd-a052c596910cf08d62778597facf18ee3f3076f3439d760c33834ab6e38fe324.scope - libcontainer container a052c596910cf08d62778597facf18ee3f3076f3439d760c33834ab6e38fe324.
Jan 17 00:28:00.484430 containerd[1969]: time="2026-01-17T00:28:00.484385438Z" level=info msg="StartContainer for \"a052c596910cf08d62778597facf18ee3f3076f3439d760c33834ab6e38fe324\" returns successfully"
Jan 17 00:28:01.287571 kubelet[3181]: E0117 00:28:01.286512 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:28:01.545588 kubelet[3181]: I0117 00:28:01.534245 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-68786dcb58-br42g" podStartSLOduration=2.320890009 podStartE2EDuration="4.531207287s" podCreationTimestamp="2026-01-17 00:27:57 +0000 UTC" firstStartedPulling="2026-01-17 00:27:58.078241695 +0000 UTC m=+22.100132600" lastFinishedPulling="2026-01-17 00:28:00.288559034 +0000 UTC m=+24.310449878" observedRunningTime="2026-01-17 00:28:01.514560457 +0000 UTC m=+25.536451405" watchObservedRunningTime="2026-01-17 00:28:01.531207287 +0000 UTC m=+25.553098152"
Jan 17 00:28:01.547236 kubelet[3181]: E0117 00:28:01.547169 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:28:01.547236 kubelet[3181]: W0117 00:28:01.547200 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:28:01.547745 kubelet[3181]: E0117 00:28:01.547480 3181 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:28:01.989827 containerd[1969]: time="2026-01-17T00:28:01.989730473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:01.990659 containerd[1969]: time="2026-01-17T00:28:01.990607843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 17 00:28:01.992492 containerd[1969]: time="2026-01-17T00:28:01.991815132Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:01.998218 containerd[1969]: time="2026-01-17T00:28:01.998173610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:01.999200 containerd[1969]: time="2026-01-17T00:28:01.999126726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.702754389s"
Jan 17 00:28:01.999403 containerd[1969]: time="2026-01-17T00:28:01.999204006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 17 00:28:02.019832 containerd[1969]: time="2026-01-17T00:28:02.019744076Z" level=info msg="CreateContainer within sandbox \"1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 00:28:02.122737 containerd[1969]: time="2026-01-17T00:28:02.118601207Z" level=info msg="CreateContainer within sandbox \"1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f\""
Jan 17 00:28:02.123644 containerd[1969]: time="2026-01-17T00:28:02.123077626Z" level=info msg="StartContainer for \"2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f\""
Jan 17 00:28:02.489401 kubelet[3181]: I0117 00:28:02.489370 3181 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 00:28:02.555559 kubelet[3181]: E0117 00:28:02.555308 3181 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:28:02.555559 kubelet[3181]: W0117 00:28:02.555340 3181 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:28:02.555559 kubelet[3181]: E0117 00:28:02.555478 3181 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:28:02.622228 systemd[1]: Started cri-containerd-2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f.scope - libcontainer container 2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f.
Jan 17 00:28:02.751374 containerd[1969]: time="2026-01-17T00:28:02.751254831Z" level=info msg="StartContainer for \"2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f\" returns successfully"
Jan 17 00:28:02.785834 systemd[1]: cri-containerd-2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f.scope: Deactivated successfully.
Jan 17 00:28:03.142655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f-rootfs.mount: Deactivated successfully.
Jan 17 00:28:03.224424 containerd[1969]: time="2026-01-17T00:28:03.147224977Z" level=info msg="shim disconnected" id=2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f namespace=k8s.io
Jan 17 00:28:03.224953 containerd[1969]: time="2026-01-17T00:28:03.224423487Z" level=warning msg="cleaning up after shim disconnected" id=2f9f0a6c04d011e8d20b18398391ed4af278b44dd45f5aefa7e9a105f05c923f namespace=k8s.io
Jan 17 00:28:03.224953 containerd[1969]: time="2026-01-17T00:28:03.224449083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:28:03.267054 kubelet[3181]: E0117 00:28:03.266991 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:28:03.489982 containerd[1969]: time="2026-01-17T00:28:03.489944081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 17 00:28:05.269355 kubelet[3181]: E0117 00:28:05.267934 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:28:06.620568 containerd[1969]: time="2026-01-17T00:28:06.620513567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:06.622404 containerd[1969]: time="2026-01-17T00:28:06.622346713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 17 00:28:06.624687 containerd[1969]: time="2026-01-17T00:28:06.624632338Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:06.628425 containerd[1969]: time="2026-01-17T00:28:06.628017545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:06.630488 containerd[1969]: time="2026-01-17T00:28:06.629916759Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.139930279s"
Jan 17 00:28:06.630488 containerd[1969]: time="2026-01-17T00:28:06.629949395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 17 00:28:06.635969 containerd[1969]: time="2026-01-17T00:28:06.635925229Z" level=info msg="CreateContainer within sandbox \"1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 00:28:06.662137 containerd[1969]: time="2026-01-17T00:28:06.662085434Z" level=info msg="CreateContainer within sandbox \"1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429\""
Jan 17 00:28:06.663975 containerd[1969]: time="2026-01-17T00:28:06.662612046Z" level=info msg="StartContainer for \"a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429\""
Jan 17 00:28:06.691105 systemd[1]: run-containerd-runc-k8s.io-a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429-runc.smTwRH.mount: Deactivated successfully.
Jan 17 00:28:06.697941 systemd[1]: Started cri-containerd-a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429.scope - libcontainer container a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429.
Jan 17 00:28:06.731727 containerd[1969]: time="2026-01-17T00:28:06.731683548Z" level=info msg="StartContainer for \"a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429\" returns successfully"
Jan 17 00:28:07.268701 kubelet[3181]: E0117 00:28:07.267389 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:28:08.031714 systemd[1]: cri-containerd-a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429.scope: Deactivated successfully.
Jan 17 00:28:08.096993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429-rootfs.mount: Deactivated successfully.
Jan 17 00:28:08.104397 containerd[1969]: time="2026-01-17T00:28:08.104323647Z" level=info msg="shim disconnected" id=a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429 namespace=k8s.io
Jan 17 00:28:08.105317 containerd[1969]: time="2026-01-17T00:28:08.104988170Z" level=warning msg="cleaning up after shim disconnected" id=a8f18c4583009a090cccd24d0a231c566ed675237982627843b30ec5f6cfd429 namespace=k8s.io
Jan 17 00:28:08.105317 containerd[1969]: time="2026-01-17T00:28:08.105018317Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:28:08.158231 kubelet[3181]: I0117 00:28:08.158197 3181 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 17 00:28:08.287293 kubelet[3181]: I0117 00:28:08.285081 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfl7v\" (UniqueName: \"kubernetes.io/projected/dc3409d6-ff21-405a-b461-9d804b643b66-kube-api-access-lfl7v\") pod \"coredns-674b8bbfcf-wldcl\" (UID: \"dc3409d6-ff21-405a-b461-9d804b643b66\") " pod="kube-system/coredns-674b8bbfcf-wldcl"
Jan 17 00:28:08.287293 kubelet[3181]: I0117 00:28:08.285146 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b97d5ab-19c5-4717-a6ca-1a7a01547f6c-calico-apiserver-certs\") pod \"calico-apiserver-6747446b5-k9mxk\" (UID: \"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c\") " pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk"
Jan 17 00:28:08.287293 kubelet[3181]: I0117 00:28:08.285181 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394e468b-e5d2-4096-94d5-a6a60d966235-config\") pod \"goldmane-666569f655-6bww6\" (UID: \"394e468b-e5d2-4096-94d5-a6a60d966235\") " pod="calico-system/goldmane-666569f655-6bww6"
Jan 17 00:28:08.287293 kubelet[3181]: I0117 00:28:08.285223 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc3409d6-ff21-405a-b461-9d804b643b66-config-volume\") pod \"coredns-674b8bbfcf-wldcl\" (UID: \"dc3409d6-ff21-405a-b461-9d804b643b66\") " pod="kube-system/coredns-674b8bbfcf-wldcl"
Jan 17 00:28:08.287293 kubelet[3181]: I0117 00:28:08.285252 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394e468b-e5d2-4096-94d5-a6a60d966235-goldmane-ca-bundle\") pod \"goldmane-666569f655-6bww6\" (UID: \"394e468b-e5d2-4096-94d5-a6a60d966235\") " pod="calico-system/goldmane-666569f655-6bww6"
Jan 17 00:28:08.289021 kubelet[3181]: I0117 00:28:08.285293 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8vm\" (UniqueName: \"kubernetes.io/projected/394e468b-e5d2-4096-94d5-a6a60d966235-kube-api-access-zc8vm\") pod \"goldmane-666569f655-6bww6\" (UID: \"394e468b-e5d2-4096-94d5-a6a60d966235\") " pod="calico-system/goldmane-666569f655-6bww6"
Jan 17 00:28:08.289021 kubelet[3181]: I0117 00:28:08.285319 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6f9f\" (UniqueName: \"kubernetes.io/projected/0b97d5ab-19c5-4717-a6ca-1a7a01547f6c-kube-api-access-n6f9f\") pod \"calico-apiserver-6747446b5-k9mxk\" (UID: \"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c\") " pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk"
Jan 17 00:28:08.289021 kubelet[3181]: I0117 00:28:08.285378 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/394e468b-e5d2-4096-94d5-a6a60d966235-goldmane-key-pair\") pod \"goldmane-666569f655-6bww6\" (UID: \"394e468b-e5d2-4096-94d5-a6a60d966235\") " pod="calico-system/goldmane-666569f655-6bww6"
Jan 17 00:28:08.288407 systemd[1]: Created slice kubepods-burstable-poddc3409d6_ff21_405a_b461_9d804b643b66.slice - libcontainer container kubepods-burstable-poddc3409d6_ff21_405a_b461_9d804b643b66.slice.
Jan 17 00:28:08.309390 systemd[1]: Created slice kubepods-burstable-pod67a190ca_72c5_48e2_b272_116175d17788.slice - libcontainer container kubepods-burstable-pod67a190ca_72c5_48e2_b272_116175d17788.slice.
Jan 17 00:28:08.323471 systemd[1]: Created slice kubepods-besteffort-pod0b97d5ab_19c5_4717_a6ca_1a7a01547f6c.slice - libcontainer container kubepods-besteffort-pod0b97d5ab_19c5_4717_a6ca_1a7a01547f6c.slice.
Jan 17 00:28:08.332356 systemd[1]: Created slice kubepods-besteffort-podcc407ca1_a787_4c80_b23e_a6c88347fad4.slice - libcontainer container kubepods-besteffort-podcc407ca1_a787_4c80_b23e_a6c88347fad4.slice.
Jan 17 00:28:08.340842 systemd[1]: Created slice kubepods-besteffort-pode4d1bdfe_e288_4dee_b980_bbf4550bf441.slice - libcontainer container kubepods-besteffort-pode4d1bdfe_e288_4dee_b980_bbf4550bf441.slice.
Jan 17 00:28:08.350563 systemd[1]: Created slice kubepods-besteffort-pod394e468b_e5d2_4096_94d5_a6a60d966235.slice - libcontainer container kubepods-besteffort-pod394e468b_e5d2_4096_94d5_a6a60d966235.slice.
Jan 17 00:28:08.364670 systemd[1]: Created slice kubepods-besteffort-pod5fd74b61_87d1_45e4_b949_57645e5eb510.slice - libcontainer container kubepods-besteffort-pod5fd74b61_87d1_45e4_b949_57645e5eb510.slice.
Jan 17 00:28:08.386897 kubelet[3181]: I0117 00:28:08.386849 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-backend-key-pair\") pod \"whisker-68745d549-cbm5w\" (UID: \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\") " pod="calico-system/whisker-68745d549-cbm5w"
Jan 17 00:28:08.387204 kubelet[3181]: I0117 00:28:08.386977 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-ca-bundle\") pod \"whisker-68745d549-cbm5w\" (UID: \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\") " pod="calico-system/whisker-68745d549-cbm5w"
Jan 17 00:28:08.387204 kubelet[3181]: I0117 00:28:08.387056 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc407ca1-a787-4c80-b23e-a6c88347fad4-calico-apiserver-certs\") pod \"calico-apiserver-6747446b5-7hcx6\" (UID: \"cc407ca1-a787-4c80-b23e-a6c88347fad4\") " pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6"
Jan 17 00:28:08.387204 kubelet[3181]: I0117 00:28:08.387085 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgh44\" (UniqueName: \"kubernetes.io/projected/e4d1bdfe-e288-4dee-b980-bbf4550bf441-kube-api-access-cgh44\") pod \"whisker-68745d549-cbm5w\" (UID: \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\") " pod="calico-system/whisker-68745d549-cbm5w"
Jan 17 00:28:08.387204 kubelet[3181]: I0117 00:28:08.387132 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67a190ca-72c5-48e2-b272-116175d17788-config-volume\") pod \"coredns-674b8bbfcf-6m4tc\" (UID: \"67a190ca-72c5-48e2-b272-116175d17788\") " pod="kube-system/coredns-674b8bbfcf-6m4tc"
Jan 17 00:28:08.387204 kubelet[3181]: I0117 00:28:08.387156 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjp4\" (UniqueName: \"kubernetes.io/projected/67a190ca-72c5-48e2-b272-116175d17788-kube-api-access-7zjp4\") pod \"coredns-674b8bbfcf-6m4tc\" (UID: \"67a190ca-72c5-48e2-b272-116175d17788\") " pod="kube-system/coredns-674b8bbfcf-6m4tc"
Jan 17 00:28:08.387448 kubelet[3181]: I0117 00:28:08.387179 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjs6\" (UniqueName: \"kubernetes.io/projected/5fd74b61-87d1-45e4-b949-57645e5eb510-kube-api-access-cnjs6\") pod \"calico-kube-controllers-877bf5958-fmwqm\" (UID: \"5fd74b61-87d1-45e4-b949-57645e5eb510\") " pod="calico-system/calico-kube-controllers-877bf5958-fmwqm"
Jan 17 00:28:08.387448 kubelet[3181]: I0117 00:28:08.387207 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd74b61-87d1-45e4-b949-57645e5eb510-tigera-ca-bundle\") pod \"calico-kube-controllers-877bf5958-fmwqm\" (UID: \"5fd74b61-87d1-45e4-b949-57645e5eb510\") " pod="calico-system/calico-kube-controllers-877bf5958-fmwqm"
Jan 17 00:28:08.387448 kubelet[3181]: I0117 00:28:08.387230 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgp8\" (UniqueName: \"kubernetes.io/projected/cc407ca1-a787-4c80-b23e-a6c88347fad4-kube-api-access-xlgp8\") pod \"calico-apiserver-6747446b5-7hcx6\" (UID: \"cc407ca1-a787-4c80-b23e-a6c88347fad4\") " pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6"
Jan 17 00:28:08.523047 containerd[1969]: time="2026-01-17T00:28:08.522320583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 17 00:28:08.621506 containerd[1969]: time="2026-01-17T00:28:08.620986323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6m4tc,Uid:67a190ca-72c5-48e2-b272-116175d17788,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:08.621506 containerd[1969]: time="2026-01-17T00:28:08.621088691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wldcl,Uid:dc3409d6-ff21-405a-b461-9d804b643b66,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:08.630335 containerd[1969]: time="2026-01-17T00:28:08.630293226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-k9mxk,Uid:0b97d5ab-19c5-4717-a6ca-1a7a01547f6c,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:28:08.648020 containerd[1969]: time="2026-01-17T00:28:08.647221767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68745d549-cbm5w,Uid:e4d1bdfe-e288-4dee-b980-bbf4550bf441,Namespace:calico-system,Attempt:0,}"
Jan 17 00:28:08.648020 containerd[1969]: time="2026-01-17T00:28:08.647632747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-7hcx6,Uid:cc407ca1-a787-4c80-b23e-a6c88347fad4,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:28:08.671607 containerd[1969]: time="2026-01-17T00:28:08.671561960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877bf5958-fmwqm,Uid:5fd74b61-87d1-45e4-b949-57645e5eb510,Namespace:calico-system,Attempt:0,}"
Jan 17 00:28:08.672792 containerd[1969]: time="2026-01-17T00:28:08.672514929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6bww6,Uid:394e468b-e5d2-4096-94d5-a6a60d966235,Namespace:calico-system,Attempt:0,}"
Jan 17 00:28:09.080350 containerd[1969]: time="2026-01-17T00:28:09.080281390Z" level=error msg="Failed to destroy network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.082448 containerd[1969]: time="2026-01-17T00:28:09.081103627Z" level=error msg="Failed to destroy network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.091778 containerd[1969]: time="2026-01-17T00:28:09.089737558Z" level=error msg="Failed to destroy network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.092077 containerd[1969]: time="2026-01-17T00:28:09.089876964Z" level=error msg="Failed to destroy network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.093001 containerd[1969]: time="2026-01-17T00:28:09.092962507Z" level=error msg="encountered an error cleaning up failed sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.094994 containerd[1969]: time="2026-01-17T00:28:09.094889357Z" level=error msg="Failed to destroy network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.095494 containerd[1969]: time="2026-01-17T00:28:09.095457215Z" level=error msg="encountered an error cleaning up failed sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.096157 containerd[1969]: time="2026-01-17T00:28:09.096111246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877bf5958-fmwqm,Uid:5fd74b61-87d1-45e4-b949-57645e5eb510,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.096544 containerd[1969]: time="2026-01-17T00:28:09.095870597Z" level=error msg="encountered an error cleaning up failed sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.097010 containerd[1969]: time="2026-01-17T00:28:09.096980332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-k9mxk,Uid:0b97d5ab-19c5-4717-a6ca-1a7a01547f6c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.116926 containerd[1969]: time="2026-01-17T00:28:09.095915379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68745d549-cbm5w,Uid:e4d1bdfe-e288-4dee-b980-bbf4550bf441,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120108 containerd[1969]: time="2026-01-17T00:28:09.089947922Z" level=error msg="Failed to destroy network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120108 containerd[1969]: time="2026-01-17T00:28:09.090044237Z" level=error msg="Failed to destroy network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120995 containerd[1969]: time="2026-01-17T00:28:09.120643855Z" level=error msg="encountered an error cleaning up failed sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120995 containerd[1969]: time="2026-01-17T00:28:09.120712074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wldcl,Uid:dc3409d6-ff21-405a-b461-9d804b643b66,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120995 containerd[1969]: time="2026-01-17T00:28:09.090056339Z" level=error msg="encountered an error cleaning up failed sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120995 containerd[1969]: time="2026-01-17T00:28:09.120849209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-7hcx6,Uid:cc407ca1-a787-4c80-b23e-a6c88347fad4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120995 containerd[1969]: time="2026-01-17T00:28:09.090061947Z" level=error msg="encountered an error cleaning up failed sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.120995 containerd[1969]: time="2026-01-17T00:28:09.120942144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6m4tc,Uid:67a190ca-72c5-48e2-b272-116175d17788,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.123777 containerd[1969]: time="2026-01-17T00:28:09.121631175Z" level=error msg="encountered an error cleaning up failed sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.123777 containerd[1969]: time="2026-01-17T00:28:09.121694650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6bww6,Uid:394e468b-e5d2-4096-94d5-a6a60d966235,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.134305 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa-shm.mount: Deactivated successfully.
Jan 17 00:28:09.134446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800-shm.mount: Deactivated successfully.
Jan 17 00:28:09.134533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402-shm.mount: Deactivated successfully.
Jan 17 00:28:09.134616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c-shm.mount: Deactivated successfully.
Jan 17 00:28:09.134700 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f-shm.mount: Deactivated successfully.
Jan 17 00:28:09.140320 kubelet[3181]: E0117 00:28:09.140260 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.141783 kubelet[3181]: E0117 00:28:09.135981 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.142694 kubelet[3181]: E0117 00:28:09.142648 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6bww6"
Jan 17 00:28:09.142812 kubelet[3181]: E0117 00:28:09.142723 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-6bww6"
Jan 17 00:28:09.142872 kubelet[3181]: E0117 00:28:09.142818 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-6bww6_calico-system(394e468b-e5d2-4096-94d5-a6a60d966235)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-6bww6_calico-system(394e468b-e5d2-4096-94d5-a6a60d966235)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235"
Jan 17 00:28:09.145174 kubelet[3181]: E0117 00:28:09.132782 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.145174 kubelet[3181]: E0117 00:28:09.143943 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm"
Jan 17 00:28:09.145174 kubelet[3181]: E0117 00:28:09.143974 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm"
Jan 17 00:28:09.145354 kubelet[3181]: E0117 00:28:09.144031 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-877bf5958-fmwqm_calico-system(5fd74b61-87d1-45e4-b949-57645e5eb510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-877bf5958-fmwqm_calico-system(5fd74b61-87d1-45e4-b949-57645e5eb510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510"
Jan 17 00:28:09.145354 kubelet[3181]: E0117 00:28:09.144098 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.145354 kubelet[3181]: E0117 00:28:09.144125 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wldcl"
Jan 17 00:28:09.145491 kubelet[3181]: E0117 00:28:09.144146 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wldcl"
Jan 17 00:28:09.145491 kubelet[3181]: E0117 00:28:09.144182 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wldcl_kube-system(dc3409d6-ff21-405a-b461-9d804b643b66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wldcl_kube-system(dc3409d6-ff21-405a-b461-9d804b643b66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wldcl" podUID="dc3409d6-ff21-405a-b461-9d804b643b66"
Jan 17 00:28:09.145491 kubelet[3181]: E0117 00:28:09.144217 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.145614 kubelet[3181]: E0117 00:28:09.144239 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68745d549-cbm5w"
Jan 17 00:28:09.145614 kubelet[3181]: E0117 00:28:09.144259 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68745d549-cbm5w"
Jan 17 00:28:09.145614 kubelet[3181]: E0117 00:28:09.144293 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68745d549-cbm5w_calico-system(e4d1bdfe-e288-4dee-b980-bbf4550bf441)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68745d549-cbm5w_calico-system(e4d1bdfe-e288-4dee-b980-bbf4550bf441)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68745d549-cbm5w" podUID="e4d1bdfe-e288-4dee-b980-bbf4550bf441"
Jan 17 00:28:09.145727 kubelet[3181]: E0117 00:28:09.144326 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.145727 kubelet[3181]: E0117 00:28:09.144350 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6"
Jan 17 00:28:09.145727 kubelet[3181]: E0117 00:28:09.144368 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6"
Jan 17 00:28:09.145849 kubelet[3181]: E0117 00:28:09.144408 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6747446b5-7hcx6_calico-apiserver(cc407ca1-a787-4c80-b23e-a6c88347fad4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6747446b5-7hcx6_calico-apiserver(cc407ca1-a787-4c80-b23e-a6c88347fad4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4"
Jan 17 00:28:09.145849 kubelet[3181]: E0117 00:28:09.144444 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.145849 kubelet[3181]: E0117 00:28:09.144469 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6m4tc"
Jan 17 00:28:09.146018 kubelet[3181]: E0117 00:28:09.144484 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6m4tc"
Jan 17 00:28:09.146018 kubelet[3181]: E0117 00:28:09.144518 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6m4tc_kube-system(67a190ca-72c5-48e2-b272-116175d17788)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6m4tc_kube-system(67a190ca-72c5-48e2-b272-116175d17788)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6m4tc" podUID="67a190ca-72c5-48e2-b272-116175d17788"
Jan 17 00:28:09.151915 kubelet[3181]: E0117 00:28:09.142643 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk"
Jan 17 00:28:09.151915 kubelet[3181]: E0117 00:28:09.151798 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk"
Jan 17 00:28:09.151915 kubelet[3181]: E0117 00:28:09.151858 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6747446b5-k9mxk_calico-apiserver(0b97d5ab-19c5-4717-a6ca-1a7a01547f6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6747446b5-k9mxk_calico-apiserver(0b97d5ab-19c5-4717-a6ca-1a7a01547f6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c"
Jan 17 00:28:09.276574 systemd[1]: Created slice kubepods-besteffort-podc5cbb1a7_a8a6_481d_bf9e_6f05e0da26d9.slice - libcontainer container kubepods-besteffort-podc5cbb1a7_a8a6_481d_bf9e_6f05e0da26d9.slice.
Jan 17 00:28:09.280277 containerd[1969]: time="2026-01-17T00:28:09.279634993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5p9mr,Uid:c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9,Namespace:calico-system,Attempt:0,}"
Jan 17 00:28:09.348836 containerd[1969]: time="2026-01-17T00:28:09.347071867Z" level=error msg="Failed to destroy network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.348836 containerd[1969]: time="2026-01-17T00:28:09.347425802Z" level=error msg="encountered an error cleaning up failed sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.348836 containerd[1969]: time="2026-01-17T00:28:09.347489835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5p9mr,Uid:c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.350020 kubelet[3181]: E0117 00:28:09.349885 3181 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.350020 kubelet[3181]: E0117 00:28:09.349952 3181 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5p9mr"
Jan 17 00:28:09.350020 kubelet[3181]: E0117 00:28:09.349983 3181 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5p9mr"
Jan 17 00:28:09.354014 kubelet[3181]: E0117 00:28:09.350057 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:28:09.353586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2-shm.mount: Deactivated successfully.
Jan 17 00:28:09.522876 kubelet[3181]: I0117 00:28:09.522827 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f"
Jan 17 00:28:09.524096 kubelet[3181]: I0117 00:28:09.524071 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831"
Jan 17 00:28:09.544730 kubelet[3181]: I0117 00:28:09.544584 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2"
Jan 17 00:28:09.546106 kubelet[3181]: I0117 00:28:09.545922 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa"
Jan 17 00:28:09.548397 kubelet[3181]: I0117 00:28:09.548363 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800"
Jan 17 00:28:09.550104 kubelet[3181]: I0117 00:28:09.549501 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c"
Jan 17 00:28:09.557231 containerd[1969]: time="2026-01-17T00:28:09.556802690Z" level=info msg="StopPodSandbox for \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\""
Jan 17 00:28:09.558698 kubelet[3181]: I0117 00:28:09.557847 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60"
Jan 17 00:28:09.558829 containerd[1969]: time="2026-01-17T00:28:09.558276672Z" level=info msg="StopPodSandbox for \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\""
Jan 17 00:28:09.558829 containerd[1969]: time="2026-01-17T00:28:09.558731798Z" level=info msg="Ensure that sandbox a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800 in task-service has been cleanup successfully"
Jan 17 00:28:09.558936 containerd[1969]: time="2026-01-17T00:28:09.558912560Z" level=info msg="Ensure that sandbox 165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f in task-service has been cleanup successfully"
Jan 17 00:28:09.559906 containerd[1969]: time="2026-01-17T00:28:09.557195027Z" level=info msg="StopPodSandbox for \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\""
Jan 17 00:28:09.560105 containerd[1969]: time="2026-01-17T00:28:09.560081269Z" level=info msg="StopPodSandbox for \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\""
Jan 17 00:28:09.560263 containerd[1969]: time="2026-01-17T00:28:09.560245226Z" level=info msg="Ensure that sandbox fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60 in task-service has been cleanup successfully"
Jan 17 00:28:09.560365 containerd[1969]: time="2026-01-17T00:28:09.560350504Z" level=info msg="Ensure that sandbox 132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c in task-service has been cleanup successfully"
Jan 17 00:28:09.560761 containerd[1969]: time="2026-01-17T00:28:09.557511262Z" level=info msg="StopPodSandbox for \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\""
Jan 17 00:28:09.562416 containerd[1969]: time="2026-01-17T00:28:09.562393436Z" level=info msg="Ensure that sandbox 23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa in task-service has been cleanup successfully"
Jan 17 00:28:09.565245 containerd[1969]: time="2026-01-17T00:28:09.557444216Z" level=info msg="StopPodSandbox for \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\""
Jan 17 00:28:09.565418 containerd[1969]: time="2026-01-17T00:28:09.565399423Z" level=info msg="Ensure that sandbox 6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831 in task-service has been cleanup successfully"
Jan 17 00:28:09.565674 containerd[1969]: time="2026-01-17T00:28:09.557486238Z" level=info msg="StopPodSandbox for \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\""
Jan 17 00:28:09.565959 containerd[1969]: time="2026-01-17T00:28:09.565921631Z" level=info msg="Ensure that sandbox aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2 in task-service has been cleanup successfully"
Jan 17 00:28:09.566654 kubelet[3181]: I0117 00:28:09.566629 3181 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402"
Jan 17 00:28:09.567968 containerd[1969]: time="2026-01-17T00:28:09.567949692Z" level=info msg="StopPodSandbox for \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\""
Jan 17 00:28:09.568518 containerd[1969]: time="2026-01-17T00:28:09.568452764Z" level=info msg="Ensure that sandbox 82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402 in task-service has been cleanup successfully"
Jan 17 00:28:09.643179 containerd[1969]: time="2026-01-17T00:28:09.642884391Z" level=error msg="StopPodSandbox for \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\" failed" error="failed to destroy network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.646261 kubelet[3181]: E0117 00:28:09.643257 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800"
Jan 17 00:28:09.646442 kubelet[3181]: E0117 00:28:09.646290 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800"}
Jan 17 00:28:09.646442 kubelet[3181]: E0117 00:28:09.646350 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fd74b61-87d1-45e4-b949-57645e5eb510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:28:09.646442 kubelet[3181]: E0117 00:28:09.646372 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fd74b61-87d1-45e4-b949-57645e5eb510\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510"
Jan 17 00:28:09.654468 containerd[1969]: time="2026-01-17T00:28:09.654409483Z" level=error msg="StopPodSandbox for \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\" failed" error="failed to destroy network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.654965 kubelet[3181]: E0117 00:28:09.654624 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831"
Jan 17 00:28:09.654965 kubelet[3181]: E0117 00:28:09.654670 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831"}
Jan 17 00:28:09.654965 kubelet[3181]: E0117 00:28:09.654793 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc407ca1-a787-4c80-b23e-a6c88347fad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:28:09.654965 kubelet[3181]: E0117 00:28:09.654817 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc407ca1-a787-4c80-b23e-a6c88347fad4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4"
Jan 17 00:28:09.701622 containerd[1969]: time="2026-01-17T00:28:09.701570123Z" level=error msg="StopPodSandbox for \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\" failed" error="failed to destroy network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.701961 kubelet[3181]: E0117 00:28:09.701822 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f"
Jan 17 00:28:09.701961 kubelet[3181]: E0117 00:28:09.701871 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f"}
Jan 17 00:28:09.701961 kubelet[3181]: E0117 00:28:09.701904 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc3409d6-ff21-405a-b461-9d804b643b66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:28:09.701961 kubelet[3181]: E0117 00:28:09.701931 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc3409d6-ff21-405a-b461-9d804b643b66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wldcl" podUID="dc3409d6-ff21-405a-b461-9d804b643b66"
Jan 17 00:28:09.707058 containerd[1969]: time="2026-01-17T00:28:09.707012221Z" level=error msg="StopPodSandbox for \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\" failed" error="failed to destroy network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.707377 kubelet[3181]: E0117 00:28:09.707239 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2"
Jan 17 00:28:09.707377 kubelet[3181]: E0117 00:28:09.707291 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2"}
Jan 17 00:28:09.707377 kubelet[3181]: E0117 00:28:09.707326 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:28:09.707377 kubelet[3181]: E0117 00:28:09.707350 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9"
Jan 17 00:28:09.708761 containerd[1969]: time="2026-01-17T00:28:09.708712662Z" level=error msg="StopPodSandbox for \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\" failed" error="failed to destroy network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.708966 kubelet[3181]: E0117 00:28:09.708938 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa"
Jan 17 00:28:09.709105 kubelet[3181]: E0117 00:28:09.709027 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa"}
Jan 17 00:28:09.709105 kubelet[3181]: E0117 00:28:09.709057 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"394e468b-e5d2-4096-94d5-a6a60d966235\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:28:09.709105 kubelet[3181]: E0117 00:28:09.709080 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"394e468b-e5d2-4096-94d5-a6a60d966235\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235"
Jan 17 00:28:09.711928 containerd[1969]: time="2026-01-17T00:28:09.711890918Z" level=error msg="StopPodSandbox for \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\" failed" error="failed to destroy network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.712787 kubelet[3181]: E0117 00:28:09.712186 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60"
Jan 17 00:28:09.712787 kubelet[3181]: E0117 00:28:09.712277 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60"}
Jan 17 00:28:09.712787 kubelet[3181]: E0117 00:28:09.712363 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67a190ca-72c5-48e2-b272-116175d17788\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 17 00:28:09.712787 kubelet[3181]: E0117 00:28:09.712409 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67a190ca-72c5-48e2-b272-116175d17788\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6m4tc" podUID="67a190ca-72c5-48e2-b272-116175d17788"
Jan 17 00:28:09.713586 containerd[1969]: time="2026-01-17T00:28:09.713554335Z" level=error msg="StopPodSandbox for \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\" failed" error="failed to destroy network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:28:09.713760 kubelet[3181]: E0117 00:28:09.713721 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node
container is running and has mounted /var/lib/calico/" podSandboxID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:09.713842 kubelet[3181]: E0117 00:28:09.713828 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c"} Jan 17 00:28:09.713923 kubelet[3181]: E0117 00:28:09.713911 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:28:09.714014 kubelet[3181]: E0117 00:28:09.714000 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:28:09.722548 containerd[1969]: time="2026-01-17T00:28:09.722504259Z" level=error msg="StopPodSandbox for \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\" failed" error="failed to destroy network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:28:09.723948 kubelet[3181]: E0117 00:28:09.723906 3181 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:09.724104 kubelet[3181]: E0117 00:28:09.724088 3181 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402"} Jan 17 00:28:09.724190 kubelet[3181]: E0117 00:28:09.724176 3181 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:28:09.724306 kubelet[3181]: E0117 00:28:09.724289 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"e4d1bdfe-e288-4dee-b980-bbf4550bf441\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68745d549-cbm5w" podUID="e4d1bdfe-e288-4dee-b980-bbf4550bf441" Jan 17 00:28:14.698215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721222998.mount: Deactivated successfully. Jan 17 00:28:14.869101 containerd[1969]: time="2026-01-17T00:28:14.868859091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:28:14.875208 containerd[1969]: time="2026-01-17T00:28:14.875070470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:14.920984 containerd[1969]: time="2026-01-17T00:28:14.920003116Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:14.920984 containerd[1969]: time="2026-01-17T00:28:14.920824212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.398449041s" Jan 17 00:28:14.920984 containerd[1969]: time="2026-01-17T00:28:14.920867255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:28:14.936602 containerd[1969]: time="2026-01-17T00:28:14.936553930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:14.967107 containerd[1969]: time="2026-01-17T00:28:14.966961035Z" level=info msg="CreateContainer within sandbox \"1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:28:15.033013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142594323.mount: Deactivated successfully. Jan 17 00:28:15.052802 containerd[1969]: time="2026-01-17T00:28:15.052567600Z" level=info msg="CreateContainer within sandbox \"1384345fd643d200a8380e1fe6ca6ce0855e25a7460b1cbcf9eb40a442c83fd3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d51bffd6af8c3c13facd3d1645fc2913bd101dea08cfe01927b46792a278fcd9\"" Jan 17 00:28:15.053561 containerd[1969]: time="2026-01-17T00:28:15.053524774Z" level=info msg="StartContainer for \"d51bffd6af8c3c13facd3d1645fc2913bd101dea08cfe01927b46792a278fcd9\"" Jan 17 00:28:15.223502 systemd[1]: Started cri-containerd-d51bffd6af8c3c13facd3d1645fc2913bd101dea08cfe01927b46792a278fcd9.scope - libcontainer container d51bffd6af8c3c13facd3d1645fc2913bd101dea08cfe01927b46792a278fcd9. 
Jan 17 00:28:15.283299 containerd[1969]: time="2026-01-17T00:28:15.283250671Z" level=info msg="StartContainer for \"d51bffd6af8c3c13facd3d1645fc2913bd101dea08cfe01927b46792a278fcd9\" returns successfully" Jan 17 00:28:15.412003 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:28:15.412690 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Jan 17 00:28:15.684740 kubelet[3181]: I0117 00:28:15.681202 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-khjqh" podStartSLOduration=1.846697234 podStartE2EDuration="18.674428285s" podCreationTimestamp="2026-01-17 00:27:57 +0000 UTC" firstStartedPulling="2026-01-17 00:27:58.109158323 +0000 UTC m=+22.131049165" lastFinishedPulling="2026-01-17 00:28:14.936889371 +0000 UTC m=+38.958780216" observedRunningTime="2026-01-17 00:28:15.673606985 +0000 UTC m=+39.695497849" watchObservedRunningTime="2026-01-17 00:28:15.674428285 +0000 UTC m=+39.696319150" Jan 17 00:28:15.738001 containerd[1969]: time="2026-01-17T00:28:15.736967456Z" level=info msg="StopPodSandbox for \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\"" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:15.878 [INFO][4560] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:15.878 [INFO][4560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" iface="eth0" netns="/var/run/netns/cni-2363c63f-4a33-971b-3579-02ac7290bc9b" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:15.879 [INFO][4560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" iface="eth0" netns="/var/run/netns/cni-2363c63f-4a33-971b-3579-02ac7290bc9b" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:15.880 [INFO][4560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" iface="eth0" netns="/var/run/netns/cni-2363c63f-4a33-971b-3579-02ac7290bc9b" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:15.882 [INFO][4560] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:15.882 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:16.205 [INFO][4567] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:16.210 [INFO][4567] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:16.211 [INFO][4567] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:16.226 [WARNING][4567] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:16.226 [INFO][4567] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:16.228 [INFO][4567] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:16.234802 containerd[1969]: 2026-01-17 00:28:16.231 [INFO][4560] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:16.241062 containerd[1969]: time="2026-01-17T00:28:16.237216071Z" level=info msg="TearDown network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\" successfully" Jan 17 00:28:16.241062 containerd[1969]: time="2026-01-17T00:28:16.237279907Z" level=info msg="StopPodSandbox for \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\" returns successfully" Jan 17 00:28:16.239361 systemd[1]: run-netns-cni\x2d2363c63f\x2d4a33\x2d971b\x2d3579\x2d02ac7290bc9b.mount: Deactivated successfully. Jan 17 00:28:16.385588 kubelet[3181]: I0117 00:28:16.385501 3181 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgh44\" (UniqueName: \"kubernetes.io/projected/e4d1bdfe-e288-4dee-b980-bbf4550bf441-kube-api-access-cgh44\") pod \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\" (UID: \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\") " Jan 17 00:28:16.385775 kubelet[3181]: I0117 00:28:16.385710 3181 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-backend-key-pair\") pod \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\" (UID: \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\") " Jan 17 00:28:16.385775 kubelet[3181]: I0117 00:28:16.385741 3181 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-ca-bundle\") pod \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\" (UID: \"e4d1bdfe-e288-4dee-b980-bbf4550bf441\") " Jan 17 00:28:16.438312 systemd[1]: var-lib-kubelet-pods-e4d1bdfe\x2de288\x2d4dee\x2db980\x2dbbf4550bf441-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:28:16.445338 kubelet[3181]: I0117 00:28:16.441939 3181 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e4d1bdfe-e288-4dee-b980-bbf4550bf441" (UID: "e4d1bdfe-e288-4dee-b980-bbf4550bf441"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:28:16.445503 kubelet[3181]: I0117 00:28:16.445441 3181 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e4d1bdfe-e288-4dee-b980-bbf4550bf441" (UID: "e4d1bdfe-e288-4dee-b980-bbf4550bf441"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:28:16.448396 kubelet[3181]: I0117 00:28:16.448357 3181 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d1bdfe-e288-4dee-b980-bbf4550bf441-kube-api-access-cgh44" (OuterVolumeSpecName: "kube-api-access-cgh44") pod "e4d1bdfe-e288-4dee-b980-bbf4550bf441" (UID: "e4d1bdfe-e288-4dee-b980-bbf4550bf441"). InnerVolumeSpecName "kube-api-access-cgh44". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:28:16.451053 systemd[1]: var-lib-kubelet-pods-e4d1bdfe\x2de288\x2d4dee\x2db980\x2dbbf4550bf441-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgh44.mount: Deactivated successfully. Jan 17 00:28:16.487014 kubelet[3181]: I0117 00:28:16.486872 3181 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cgh44\" (UniqueName: \"kubernetes.io/projected/e4d1bdfe-e288-4dee-b980-bbf4550bf441-kube-api-access-cgh44\") on node \"ip-172-31-25-116\" DevicePath \"\"" Jan 17 00:28:16.487014 kubelet[3181]: I0117 00:28:16.486913 3181 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-backend-key-pair\") on node \"ip-172-31-25-116\" DevicePath \"\"" Jan 17 00:28:16.487014 kubelet[3181]: I0117 00:28:16.486924 3181 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4d1bdfe-e288-4dee-b980-bbf4550bf441-whisker-ca-bundle\") on node \"ip-172-31-25-116\" DevicePath \"\"" Jan 17 00:28:16.595212 kubelet[3181]: I0117 00:28:16.595047 3181 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:28:16.597215 systemd[1]: Removed slice kubepods-besteffort-pode4d1bdfe_e288_4dee_b980_bbf4550bf441.slice - libcontainer container kubepods-besteffort-pode4d1bdfe_e288_4dee_b980_bbf4550bf441.slice. Jan 17 00:28:16.728745 systemd[1]: Created slice kubepods-besteffort-pod76314834_804d_441c_ad9c_ab52475d9d5c.slice - libcontainer container kubepods-besteffort-pod76314834_804d_441c_ad9c_ab52475d9d5c.slice. 
Jan 17 00:28:16.788798 kubelet[3181]: I0117 00:28:16.788603 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76314834-804d-441c-ad9c-ab52475d9d5c-whisker-backend-key-pair\") pod \"whisker-6656dcccd5-pnsfx\" (UID: \"76314834-804d-441c-ad9c-ab52475d9d5c\") " pod="calico-system/whisker-6656dcccd5-pnsfx" Jan 17 00:28:16.788798 kubelet[3181]: I0117 00:28:16.788656 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxcqt\" (UniqueName: \"kubernetes.io/projected/76314834-804d-441c-ad9c-ab52475d9d5c-kube-api-access-fxcqt\") pod \"whisker-6656dcccd5-pnsfx\" (UID: \"76314834-804d-441c-ad9c-ab52475d9d5c\") " pod="calico-system/whisker-6656dcccd5-pnsfx" Jan 17 00:28:16.788798 kubelet[3181]: I0117 00:28:16.788677 3181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76314834-804d-441c-ad9c-ab52475d9d5c-whisker-ca-bundle\") pod \"whisker-6656dcccd5-pnsfx\" (UID: \"76314834-804d-441c-ad9c-ab52475d9d5c\") " pod="calico-system/whisker-6656dcccd5-pnsfx" Jan 17 00:28:17.033207 containerd[1969]: time="2026-01-17T00:28:17.032689511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6656dcccd5-pnsfx,Uid:76314834-804d-441c-ad9c-ab52475d9d5c,Namespace:calico-system,Attempt:0,}" Jan 17 00:28:17.394830 (udev-worker)[4537]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:28:17.416415 systemd-networkd[1897]: calid910dbead72: Link UP Jan 17 00:28:17.416727 systemd-networkd[1897]: calid910dbead72: Gained carrier Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.204 [INFO][4668] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.233 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0 whisker-6656dcccd5- calico-system 76314834-804d-441c-ad9c-ab52475d9d5c 928 0 2026-01-17 00:28:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6656dcccd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-25-116 whisker-6656dcccd5-pnsfx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid910dbead72 [] [] }} ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.233 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.309 [INFO][4684] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" HandleID="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Workload="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" Jan 17 00:28:17.469607 
containerd[1969]: 2026-01-17 00:28:17.309 [INFO][4684] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" HandleID="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Workload="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f680), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-116", "pod":"whisker-6656dcccd5-pnsfx", "timestamp":"2026-01-17 00:28:17.309264356 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.309 [INFO][4684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.309 [INFO][4684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.309 [INFO][4684] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.324 [INFO][4684] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.338 [INFO][4684] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.346 [INFO][4684] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.349 [INFO][4684] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.352 [INFO][4684] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.352 [INFO][4684] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.355 [INFO][4684] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.362 [INFO][4684] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.374 [INFO][4684] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.129/26] block=192.168.47.128/26 handle="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.374 [INFO][4684] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.129/26] handle="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" host="ip-172-31-25-116" Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.374 [INFO][4684] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:17.469607 containerd[1969]: 2026-01-17 00:28:17.374 [INFO][4684] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.129/26] IPv6=[] ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" HandleID="k8s-pod-network.a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Workload="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" Jan 17 00:28:17.471385 containerd[1969]: 2026-01-17 00:28:17.380 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0", GenerateName:"whisker-6656dcccd5-", Namespace:"calico-system", SelfLink:"", UID:"76314834-804d-441c-ad9c-ab52475d9d5c", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6656dcccd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"whisker-6656dcccd5-pnsfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid910dbead72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:17.471385 containerd[1969]: 2026-01-17 00:28:17.380 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.129/32] ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" Jan 17 00:28:17.471385 containerd[1969]: 2026-01-17 00:28:17.380 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid910dbead72 ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" Jan 17 00:28:17.471385 containerd[1969]: 2026-01-17 00:28:17.419 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" Jan 17 00:28:17.471385 containerd[1969]: 2026-01-17 00:28:17.424 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" 
Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0", GenerateName:"whisker-6656dcccd5-", Namespace:"calico-system", SelfLink:"", UID:"76314834-804d-441c-ad9c-ab52475d9d5c", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6656dcccd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c", Pod:"whisker-6656dcccd5-pnsfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid910dbead72", MAC:"ca:0a:af:61:7d:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:17.471385 containerd[1969]: 2026-01-17 00:28:17.457 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c" Namespace="calico-system" Pod="whisker-6656dcccd5-pnsfx" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--6656dcccd5--pnsfx-eth0" Jan 17 00:28:17.553239 containerd[1969]: time="2026-01-17T00:28:17.550610322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:17.553239 containerd[1969]: time="2026-01-17T00:28:17.552794400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:17.553239 containerd[1969]: time="2026-01-17T00:28:17.552831273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:17.553239 containerd[1969]: time="2026-01-17T00:28:17.552983829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:17.593996 systemd[1]: Started cri-containerd-a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c.scope - libcontainer container a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c. 
Jan 17 00:28:17.735300 containerd[1969]: time="2026-01-17T00:28:17.735116483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6656dcccd5-pnsfx,Uid:76314834-804d-441c-ad9c-ab52475d9d5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a101b226c6f9e4c17ea8f725169ef9325a189e8824a583fdd0b9bb567ef9a03c\"" Jan 17 00:28:17.738682 containerd[1969]: time="2026-01-17T00:28:17.738428487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:28:18.041857 containerd[1969]: time="2026-01-17T00:28:18.041587640Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:18.052091 containerd[1969]: time="2026-01-17T00:28:18.043866407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:28:18.052256 containerd[1969]: time="2026-01-17T00:28:18.043978194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:28:18.052400 kubelet[3181]: E0117 00:28:18.052348 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:18.052852 kubelet[3181]: E0117 00:28:18.052412 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:18.074491 kubelet[3181]: E0117 00:28:18.074420 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:04b6a3f3f16b4f078d52b8b865750bfc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:18.076618 containerd[1969]: time="2026-01-17T00:28:18.076424769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:28:18.276019 kubelet[3181]: I0117 00:28:18.275960 3181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d1bdfe-e288-4dee-b980-bbf4550bf441" path="/var/lib/kubelet/pods/e4d1bdfe-e288-4dee-b980-bbf4550bf441/volumes" Jan 17 00:28:18.354997 kubelet[3181]: I0117 00:28:18.354674 3181 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:28:18.381123 containerd[1969]: time="2026-01-17T00:28:18.381061159Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:18.383290 containerd[1969]: time="2026-01-17T00:28:18.383228337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:28:18.383290 containerd[1969]: time="2026-01-17T00:28:18.383241062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:28:18.383783 kubelet[3181]: E0117 00:28:18.383639 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:18.383783 kubelet[3181]: E0117 00:28:18.383683 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:18.383928 kubelet[3181]: E0117 00:28:18.383833 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:18.385414 kubelet[3181]: E0117 00:28:18.385370 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:28:18.641645 kubelet[3181]: E0117 00:28:18.641338 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:28:19.032446 systemd-networkd[1897]: calid910dbead72: Gained IPv6LL Jan 17 00:28:19.130856 kernel: bpftool[4811]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:28:19.387172 systemd-networkd[1897]: vxlan.calico: Link UP Jan 17 00:28:19.387185 systemd-networkd[1897]: vxlan.calico: Gained carrier Jan 17 00:28:19.412874 (udev-worker)[4536]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:28:19.629569 kubelet[3181]: E0117 00:28:19.629459 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:28:20.269267 containerd[1969]: time="2026-01-17T00:28:20.268160008Z" level=info msg="StopPodSandbox for \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\"" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.316 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.317 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" iface="eth0" netns="/var/run/netns/cni-b2956d12-1226-6cc8-50ed-d1a3f18b40c6" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.317 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" iface="eth0" netns="/var/run/netns/cni-b2956d12-1226-6cc8-50ed-d1a3f18b40c6" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.318 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" iface="eth0" netns="/var/run/netns/cni-b2956d12-1226-6cc8-50ed-d1a3f18b40c6" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.318 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.318 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.346 [INFO][4897] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.346 [INFO][4897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.346 [INFO][4897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.355 [WARNING][4897] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.355 [INFO][4897] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.357 [INFO][4897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:20.362607 containerd[1969]: 2026-01-17 00:28:20.359 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:20.365986 containerd[1969]: time="2026-01-17T00:28:20.363646561Z" level=info msg="TearDown network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\" successfully" Jan 17 00:28:20.365986 containerd[1969]: time="2026-01-17T00:28:20.363676297Z" level=info msg="StopPodSandbox for \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\" returns successfully" Jan 17 00:28:20.366747 containerd[1969]: time="2026-01-17T00:28:20.366699650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-7hcx6,Uid:cc407ca1-a787-4c80-b23e-a6c88347fad4,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:28:20.367359 systemd[1]: run-netns-cni\x2db2956d12\x2d1226\x2d6cc8\x2d50ed\x2dd1a3f18b40c6.mount: Deactivated successfully. 
Jan 17 00:28:20.509212 systemd-networkd[1897]: cali4302629f2a8: Link UP Jan 17 00:28:20.511401 systemd-networkd[1897]: cali4302629f2a8: Gained carrier Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.425 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0 calico-apiserver-6747446b5- calico-apiserver cc407ca1-a787-4c80-b23e-a6c88347fad4 965 0 2026-01-17 00:27:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6747446b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-116 calico-apiserver-6747446b5-7hcx6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4302629f2a8 [] [] }} ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.425 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.458 [INFO][4916] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" HandleID="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.458 [INFO][4916] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" HandleID="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-116", "pod":"calico-apiserver-6747446b5-7hcx6", "timestamp":"2026-01-17 00:28:20.458209852 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.458 [INFO][4916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.458 [INFO][4916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.458 [INFO][4916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.466 [INFO][4916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.473 [INFO][4916] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.479 [INFO][4916] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.481 [INFO][4916] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.484 [INFO][4916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.484 [INFO][4916] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.485 [INFO][4916] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1 Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.493 [INFO][4916] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.502 [INFO][4916] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.130/26] block=192.168.47.128/26 handle="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.502 [INFO][4916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.130/26] handle="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" host="ip-172-31-25-116" Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.503 [INFO][4916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:28:20.531819 containerd[1969]: 2026-01-17 00:28:20.503 [INFO][4916] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.130/26] IPv6=[] ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" HandleID="k8s-pod-network.de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.534560 containerd[1969]: 2026-01-17 00:28:20.505 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc407ca1-a787-4c80-b23e-a6c88347fad4", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"calico-apiserver-6747446b5-7hcx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4302629f2a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:20.534560 containerd[1969]: 2026-01-17 00:28:20.505 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.130/32] ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.534560 containerd[1969]: 2026-01-17 00:28:20.505 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4302629f2a8 ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.534560 containerd[1969]: 2026-01-17 00:28:20.510 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.534560 containerd[1969]: 2026-01-17 00:28:20.512 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc407ca1-a787-4c80-b23e-a6c88347fad4", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1", Pod:"calico-apiserver-6747446b5-7hcx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4302629f2a8", MAC:"5e:72:1b:67:fe:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:20.534560 containerd[1969]: 2026-01-17 00:28:20.528 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-7hcx6" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:20.564706 containerd[1969]: time="2026-01-17T00:28:20.564399643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:20.564706 containerd[1969]: time="2026-01-17T00:28:20.564485872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:20.564706 containerd[1969]: time="2026-01-17T00:28:20.564509844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:20.566034 containerd[1969]: time="2026-01-17T00:28:20.565899724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:20.596974 systemd[1]: Started cri-containerd-de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1.scope - libcontainer container de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1. 
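Once the address is claimed, the plugin populates the WorkloadEndpoint, creates the veth pair with host side cali4302629f2a8, disables IPv4 forwarding on it, writes the MAC (5e:72:1b:67:fe:2e) and container ID back to the datastore, and only then does systemd start the sandbox scope. The host side is an ordinary netlink device, so its state can be checked with the same netlink library Calico's Linux dataplane builds on; a sketch, with the interface name taken from the log and everything else an assumption:

    // Illustrative sketch: inspect the host-side veth of the endpoint above.
    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    func main() {
        link, err := netlink.LinkByName("cali4302629f2a8")
        if err != nil {
            panic(err)
        }
        a := link.Attrs()
        // Expect the MAC recorded in the endpoint, and an "up" operational
        // state once systemd-networkd reports the interface gained carrier.
        fmt.Println(a.Name, a.HardwareAddr, a.OperState, "mtu", a.MTU)
    }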
Jan 17 00:28:20.676605 containerd[1969]: time="2026-01-17T00:28:20.676491869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-7hcx6,Uid:cc407ca1-a787-4c80-b23e-a6c88347fad4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1\"" Jan 17 00:28:20.678638 containerd[1969]: time="2026-01-17T00:28:20.678598740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:20.884786 systemd-networkd[1897]: vxlan.calico: Gained IPv6LL Jan 17 00:28:20.975024 containerd[1969]: time="2026-01-17T00:28:20.974962872Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:20.977297 containerd[1969]: time="2026-01-17T00:28:20.977248171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:28:20.977398 containerd[1969]: time="2026-01-17T00:28:20.977344088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:20.978200 kubelet[3181]: E0117 00:28:20.978157 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:20.978509 kubelet[3181]: E0117 00:28:20.978211 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:20.978509 kubelet[3181]: E0117 00:28:20.978341 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlgp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-7hcx6_calico-apiserver(cc407ca1-a787-4c80-b23e-a6c88347fad4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:20.979866 kubelet[3181]: E0117 00:28:20.979715 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:28:21.270785 containerd[1969]: time="2026-01-17T00:28:21.270164629Z" level=info msg="StopPodSandbox for \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\"" Jan 17 00:28:21.272181 containerd[1969]: time="2026-01-17T00:28:21.271906135Z" level=info msg="StopPodSandbox for \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\"" Jan 17 00:28:21.274101 containerd[1969]: time="2026-01-17T00:28:21.273809715Z" level=info msg="StopPodSandbox for \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\"" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.362 [INFO][4993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.364 [INFO][4993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" iface="eth0" netns="/var/run/netns/cni-af7ef176-d463-2ffe-372c-54508bd27604" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.365 [INFO][4993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" iface="eth0" netns="/var/run/netns/cni-af7ef176-d463-2ffe-372c-54508bd27604" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.367 [INFO][4993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" iface="eth0" netns="/var/run/netns/cni-af7ef176-d463-2ffe-372c-54508bd27604" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.367 [INFO][4993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.367 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.394 [INFO][5021] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.394 [INFO][5021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.394 [INFO][5021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.401 [WARNING][5021] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.401 [INFO][5021] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.403 [INFO][5021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:21.407794 containerd[1969]: 2026-01-17 00:28:21.405 [INFO][4993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:21.411126 containerd[1969]: time="2026-01-17T00:28:21.410866813Z" level=info msg="TearDown network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\" successfully" Jan 17 00:28:21.411126 containerd[1969]: time="2026-01-17T00:28:21.410900890Z" level=info msg="StopPodSandbox for \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\" returns successfully" Jan 17 00:28:21.413731 systemd[1]: run-netns-cni\x2daf7ef176\x2dd463\x2d2ffe\x2d372c\x2d54508bd27604.mount: Deactivated successfully. Jan 17 00:28:21.416503 containerd[1969]: time="2026-01-17T00:28:21.416333407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6bww6,Uid:394e468b-e5d2-4096-94d5-a6a60d966235,Namespace:calico-system,Attempt:1,}" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.348 [INFO][4998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.349 [INFO][4998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" iface="eth0" netns="/var/run/netns/cni-925c53cd-723e-1bf9-9afc-7505d63dbede" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.349 [INFO][4998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" iface="eth0" netns="/var/run/netns/cni-925c53cd-723e-1bf9-9afc-7505d63dbede" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.349 [INFO][4998] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" iface="eth0" netns="/var/run/netns/cni-925c53cd-723e-1bf9-9afc-7505d63dbede" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.350 [INFO][4998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.350 [INFO][4998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.411 [INFO][5016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.411 [INFO][5016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.412 [INFO][5016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.423 [WARNING][5016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.423 [INFO][5016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.425 [INFO][5016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:21.430923 containerd[1969]: 2026-01-17 00:28:21.427 [INFO][4998] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:21.433452 containerd[1969]: time="2026-01-17T00:28:21.431198257Z" level=info msg="TearDown network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\" successfully" Jan 17 00:28:21.433452 containerd[1969]: time="2026-01-17T00:28:21.431243264Z" level=info msg="StopPodSandbox for \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\" returns successfully" Jan 17 00:28:21.433452 containerd[1969]: time="2026-01-17T00:28:21.431972347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6m4tc,Uid:67a190ca-72c5-48e2-b272-116175d17788,Namespace:kube-system,Attempt:1,}" Jan 17 00:28:21.436630 systemd[1]: run-netns-cni\x2d925c53cd\x2d723e\x2d1bf9\x2d9afc\x2d7505d63dbede.mount: Deactivated successfully. Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.366 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.366 [INFO][5001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" iface="eth0" netns="/var/run/netns/cni-5aaaac2b-e434-574d-9696-4b0e4035bbef" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.367 [INFO][5001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" iface="eth0" netns="/var/run/netns/cni-5aaaac2b-e434-574d-9696-4b0e4035bbef" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.368 [INFO][5001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" iface="eth0" netns="/var/run/netns/cni-5aaaac2b-e434-574d-9696-4b0e4035bbef" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.368 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.368 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.424 [INFO][5027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.425 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.425 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.439 [WARNING][5027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.439 [INFO][5027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.441 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:21.446781 containerd[1969]: 2026-01-17 00:28:21.443 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:21.446781 containerd[1969]: time="2026-01-17T00:28:21.446026686Z" level=info msg="TearDown network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\" successfully" Jan 17 00:28:21.446781 containerd[1969]: time="2026-01-17T00:28:21.446050398Z" level=info msg="StopPodSandbox for \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\" returns successfully" Jan 17 00:28:21.446781 containerd[1969]: time="2026-01-17T00:28:21.446649644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877bf5958-fmwqm,Uid:5fd74b61-87d1-45e4-b949-57645e5eb510,Namespace:calico-system,Attempt:1,}" Jan 17 00:28:21.452401 systemd[1]: run-netns-cni\x2d5aaaac2b\x2de434\x2d574d\x2d9696\x2d4b0e4035bbef.mount: Deactivated successfully. 
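All three StopPodSandbox teardowns above take the same short path: the workload's veth is already gone, so only the IPAM release runs, first by handle and then by workload ID. The WARNING ("Asked to release address but it doesn't exist. Ignoring") is the benign already-released case, and each pod is immediately re-created as Attempt:1 of the same sandbox below. A sketch of the release-by-handle step, reusing the clientv3 setup from the earlier sketch; the sandboxID parameter is a hypothetical placeholder:

    // Illustrative sketch: the release traced at ipam_plugin.go 436. A handle
    // with no remaining allocations yields the benign WARNING path above.
    package teardown

    import (
        "context"

        clientv3 "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
    )

    func releaseSandboxIPs(ctx context.Context, c clientv3.Interface, sandboxID string) error {
        return c.IPAM().ReleaseByHandle(ctx, "k8s-pod-network."+sandboxID)
    }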
Jan 17 00:28:21.591437 systemd-networkd[1897]: cali4302629f2a8: Gained IPv6LL Jan 17 00:28:21.649907 kubelet[3181]: E0117 00:28:21.649854 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:28:21.715278 systemd-networkd[1897]: cali9f3dfb8abd9: Link UP Jan 17 00:28:21.715609 systemd-networkd[1897]: cali9f3dfb8abd9: Gained carrier Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.553 [INFO][5039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0 goldmane-666569f655- calico-system 394e468b-e5d2-4096-94d5-a6a60d966235 979 0 2026-01-17 00:27:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-25-116 goldmane-666569f655-6bww6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9f3dfb8abd9 [] [] }} ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.554 [INFO][5039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.620 [INFO][5076] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" HandleID="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.620 [INFO][5076] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" HandleID="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-116", "pod":"goldmane-666569f655-6bww6", "timestamp":"2026-01-17 00:28:21.620608208 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.620 [INFO][5076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.620 [INFO][5076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.621 [INFO][5076] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.630 [INFO][5076] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.638 [INFO][5076] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.649 [INFO][5076] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.658 [INFO][5076] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.665 [INFO][5076] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.665 [INFO][5076] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.668 [INFO][5076] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23 Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.690 [INFO][5076] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.699 [INFO][5076] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.131/26] block=192.168.47.128/26 handle="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.699 [INFO][5076] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.131/26] handle="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" host="ip-172-31-25-116" Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.700 [INFO][5076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:28:21.743317 containerd[1969]: 2026-01-17 00:28:21.700 [INFO][5076] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.131/26] IPv6=[] ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" HandleID="k8s-pod-network.7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.745532 containerd[1969]: 2026-01-17 00:28:21.706 [INFO][5039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"394e468b-e5d2-4096-94d5-a6a60d966235", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"goldmane-666569f655-6bww6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f3dfb8abd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:21.745532 containerd[1969]: 2026-01-17 00:28:21.706 [INFO][5039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.131/32] ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.745532 containerd[1969]: 2026-01-17 00:28:21.706 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f3dfb8abd9 ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.745532 containerd[1969]: 2026-01-17 00:28:21.717 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.745532 containerd[1969]: 2026-01-17 00:28:21.720 [INFO][5039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" 
WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"394e468b-e5d2-4096-94d5-a6a60d966235", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23", Pod:"goldmane-666569f655-6bww6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f3dfb8abd9", MAC:"5e:67:9d:52:cb:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:21.745532 containerd[1969]: 2026-01-17 00:28:21.740 [INFO][5039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23" Namespace="calico-system" Pod="goldmane-666569f655-6bww6" WorkloadEndpoint="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:21.789703 containerd[1969]: time="2026-01-17T00:28:21.789343190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:21.789703 containerd[1969]: time="2026-01-17T00:28:21.789414417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:21.789703 containerd[1969]: time="2026-01-17T00:28:21.789429990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:21.789881 containerd[1969]: time="2026-01-17T00:28:21.789525267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:21.808393 systemd-networkd[1897]: cali7f3bd27d291: Link UP Jan 17 00:28:21.809572 systemd-networkd[1897]: cali7f3bd27d291: Gained carrier Jan 17 00:28:21.841083 systemd[1]: Started cri-containerd-7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23.scope - libcontainer container 7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23. 
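While these sandboxes come up, the calico-apiserver pod stays in ImagePullBackOff: ghcr.io answered 404 for the manifest, containerd surfaced it as a NotFound rpc error, and the kubelet escalated ErrImagePull into the back-off seen at 00:28:21.649. The failing resolve can be reproduced directly against containerd with its Go client; the socket path and the "k8s.io" namespace are assumptions:

    // Illustrative sketch: re-run the pull that failed above.
    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        c, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer c.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        _, err = c.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4")
        fmt.Println(err) // expect: failed to resolve reference ... not found
    }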
Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.554 [INFO][5049] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0 coredns-674b8bbfcf- kube-system 67a190ca-72c5-48e2-b272-116175d17788 978 0 2026-01-17 00:27:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-116 coredns-674b8bbfcf-6m4tc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f3bd27d291 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.554 [INFO][5049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.666 [INFO][5075] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" HandleID="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.666 [INFO][5075] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" HandleID="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e170), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-116", "pod":"coredns-674b8bbfcf-6m4tc", "timestamp":"2026-01-17 00:28:21.666282404 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.666 [INFO][5075] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.699 [INFO][5075] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.699 [INFO][5075] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.735 [INFO][5075] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.746 [INFO][5075] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.756 [INFO][5075] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.760 [INFO][5075] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.765 [INFO][5075] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.765 [INFO][5075] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.771 [INFO][5075] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1 Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.780 [INFO][5075] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.795 [INFO][5075] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.132/26] block=192.168.47.128/26 handle="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.795 [INFO][5075] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.132/26] handle="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" host="ip-172-31-25-116" Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.795 [INFO][5075] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
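The coredns endpoint is the only one in this batch carrying WorkloadEndpointPort entries, and the Go struct dump prints the port numbers in hex: 0x35 is 53 (dns and dns-tcp) and 0x23c1 is 9153 (metrics), coredns's usual serving and Prometheus ports. A trivial check:

    package main

    import "fmt"

    func main() {
        fmt.Println(0x35, 0x23c1) // 53 9153
    }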
Jan 17 00:28:21.867987 containerd[1969]: 2026-01-17 00:28:21.795 [INFO][5075] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.132/26] IPv6=[] ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" HandleID="k8s-pod-network.4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.869664 containerd[1969]: 2026-01-17 00:28:21.800 [INFO][5049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67a190ca-72c5-48e2-b272-116175d17788", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"coredns-674b8bbfcf-6m4tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f3bd27d291", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:21.869664 containerd[1969]: 2026-01-17 00:28:21.801 [INFO][5049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.132/32] ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.869664 containerd[1969]: 2026-01-17 00:28:21.801 [INFO][5049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f3bd27d291 ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.869664 containerd[1969]: 2026-01-17 00:28:21.818 [INFO][5049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" 
WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.869664 containerd[1969]: 2026-01-17 00:28:21.827 [INFO][5049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67a190ca-72c5-48e2-b272-116175d17788", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1", Pod:"coredns-674b8bbfcf-6m4tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f3bd27d291", MAC:"d2:43:f4:58:d3:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:21.869664 containerd[1969]: 2026-01-17 00:28:21.855 [INFO][5049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1" Namespace="kube-system" Pod="coredns-674b8bbfcf-6m4tc" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:21.913239 containerd[1969]: time="2026-01-17T00:28:21.913142167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:21.913431 containerd[1969]: time="2026-01-17T00:28:21.913387574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:21.914465 containerd[1969]: time="2026-01-17T00:28:21.913689740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:21.914465 containerd[1969]: time="2026-01-17T00:28:21.913838133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:21.934949 systemd-networkd[1897]: cali3a52d1adafb: Link UP Jan 17 00:28:21.936966 systemd-networkd[1897]: cali3a52d1adafb: Gained carrier Jan 17 00:28:21.957962 systemd[1]: Started cri-containerd-4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1.scope - libcontainer container 4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1. Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.585 [INFO][5059] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0 calico-kube-controllers-877bf5958- calico-system 5fd74b61-87d1-45e4-b949-57645e5eb510 980 0 2026-01-17 00:27:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:877bf5958 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-116 calico-kube-controllers-877bf5958-fmwqm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3a52d1adafb [] [] }} ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.586 [INFO][5059] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.693 [INFO][5087] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" HandleID="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.694 [INFO][5087] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" HandleID="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ec70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-116", "pod":"calico-kube-controllers-877bf5958-fmwqm", "timestamp":"2026-01-17 00:28:21.693954686 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.694 [INFO][5087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.795 [INFO][5087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.795 [INFO][5087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.843 [INFO][5087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.856 [INFO][5087] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.871 [INFO][5087] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.876 [INFO][5087] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.887 [INFO][5087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.887 [INFO][5087] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.892 [INFO][5087] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0 Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.903 [INFO][5087] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.922 [INFO][5087] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.133/26] block=192.168.47.128/26 handle="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.922 [INFO][5087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.133/26] handle="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" host="ip-172-31-25-116" Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.922 [INFO][5087] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
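With calico-kube-controllers claiming .133, the node has handed out .130 through .133 from its affine block within this excerpt. A /26 gives each node 64 addresses (192.168.47.128 through 192.168.47.191) before it has to claim another block; a quick check with net/netip:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        p := netip.MustParsePrefix("192.168.47.128/26")
        size := 1 << (32 - p.Bits()) // 64 addresses in the block
        last := p.Masked().Addr()
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        fmt.Println(p.Masked().Addr(), "-", last, "(", size, "addresses )")
    }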
Jan 17 00:28:21.977046 containerd[1969]: 2026-01-17 00:28:21.922 [INFO][5087] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.133/26] IPv6=[] ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" HandleID="k8s-pod-network.ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.977676 containerd[1969]: 2026-01-17 00:28:21.929 [INFO][5059] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0", GenerateName:"calico-kube-controllers-877bf5958-", Namespace:"calico-system", SelfLink:"", UID:"5fd74b61-87d1-45e4-b949-57645e5eb510", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"877bf5958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"calico-kube-controllers-877bf5958-fmwqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a52d1adafb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:21.977676 containerd[1969]: 2026-01-17 00:28:21.929 [INFO][5059] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.133/32] ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.977676 containerd[1969]: 2026-01-17 00:28:21.930 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a52d1adafb ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.977676 containerd[1969]: 2026-01-17 00:28:21.937 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:21.977676 containerd[1969]: 2026-01-17 
00:28:21.940 [INFO][5059] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0", GenerateName:"calico-kube-controllers-877bf5958-", Namespace:"calico-system", SelfLink:"", UID:"5fd74b61-87d1-45e4-b949-57645e5eb510", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"877bf5958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0", Pod:"calico-kube-controllers-877bf5958-fmwqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a52d1adafb", MAC:"12:f8:bc:bd:ae:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:21.977676 containerd[1969]: 2026-01-17 00:28:21.968 [INFO][5059] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0" Namespace="calico-system" Pod="calico-kube-controllers-877bf5958-fmwqm" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:22.037434 containerd[1969]: time="2026-01-17T00:28:22.036989203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:22.038099 containerd[1969]: time="2026-01-17T00:28:22.037519244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:22.038099 containerd[1969]: time="2026-01-17T00:28:22.037550933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:22.038099 containerd[1969]: time="2026-01-17T00:28:22.037661038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:22.084622 containerd[1969]: time="2026-01-17T00:28:22.084529075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6m4tc,Uid:67a190ca-72c5-48e2-b272-116175d17788,Namespace:kube-system,Attempt:1,} returns sandbox id \"4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1\"" Jan 17 00:28:22.095526 systemd[1]: Started cri-containerd-ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0.scope - libcontainer container ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0. Jan 17 00:28:22.107846 containerd[1969]: time="2026-01-17T00:28:22.107603763Z" level=info msg="CreateContainer within sandbox \"4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:28:22.124962 containerd[1969]: time="2026-01-17T00:28:22.124707243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-6bww6,Uid:394e468b-e5d2-4096-94d5-a6a60d966235,Namespace:calico-system,Attempt:1,} returns sandbox id \"7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23\"" Jan 17 00:28:22.132700 containerd[1969]: time="2026-01-17T00:28:22.132561990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:28:22.159586 containerd[1969]: time="2026-01-17T00:28:22.159539339Z" level=info msg="CreateContainer within sandbox \"4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27654bc07dfb31e9c50dc0e3b7f83d04dfb7244c026730cc0251d6201d9a2b09\"" Jan 17 00:28:22.163928 containerd[1969]: time="2026-01-17T00:28:22.163879814Z" level=info msg="StartContainer for \"27654bc07dfb31e9c50dc0e3b7f83d04dfb7244c026730cc0251d6201d9a2b09\"" Jan 17 00:28:22.218205 systemd[1]: Started cri-containerd-27654bc07dfb31e9c50dc0e3b7f83d04dfb7244c026730cc0251d6201d9a2b09.scope - libcontainer container 27654bc07dfb31e9c50dc0e3b7f83d04dfb7244c026730cc0251d6201d9a2b09. Jan 17 00:28:22.238802 containerd[1969]: time="2026-01-17T00:28:22.238149535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-877bf5958-fmwqm,Uid:5fd74b61-87d1-45e4-b949-57645e5eb510,Namespace:calico-system,Attempt:1,} returns sandbox id \"ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0\"" Jan 17 00:28:22.269572 containerd[1969]: time="2026-01-17T00:28:22.269540520Z" level=info msg="StopPodSandbox for \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\"" Jan 17 00:28:22.270303 containerd[1969]: time="2026-01-17T00:28:22.270193960Z" level=info msg="StartContainer for \"27654bc07dfb31e9c50dc0e3b7f83d04dfb7244c026730cc0251d6201d9a2b09\" returns successfully" Jan 17 00:28:22.273633 containerd[1969]: time="2026-01-17T00:28:22.269541885Z" level=info msg="StopPodSandbox for \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\"" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.344 [INFO][5291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.346 [INFO][5291] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" iface="eth0" netns="/var/run/netns/cni-8bfef0bc-8f1a-ae25-0138-c5174dc06484" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.347 [INFO][5291] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" iface="eth0" netns="/var/run/netns/cni-8bfef0bc-8f1a-ae25-0138-c5174dc06484" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.347 [INFO][5291] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" iface="eth0" netns="/var/run/netns/cni-8bfef0bc-8f1a-ae25-0138-c5174dc06484" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.347 [INFO][5291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.347 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.395 [INFO][5322] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.396 [INFO][5322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.396 [INFO][5322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.409 [WARNING][5322] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.409 [INFO][5322] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.416 [INFO][5322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:22.434598 containerd[1969]: 2026-01-17 00:28:22.419 [INFO][5291] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:22.434598 containerd[1969]: time="2026-01-17T00:28:22.433503825Z" level=info msg="TearDown network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\" successfully" Jan 17 00:28:22.434598 containerd[1969]: time="2026-01-17T00:28:22.433535236Z" level=info msg="StopPodSandbox for \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\" returns successfully" Jan 17 00:28:22.434598 containerd[1969]: time="2026-01-17T00:28:22.434302472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5p9mr,Uid:c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9,Namespace:calico-system,Attempt:1,}" Jan 17 00:28:22.439604 systemd[1]: run-netns-cni\x2d8bfef0bc\x2d8f1a\x2dae25\x2d0138\x2dc5174dc06484.mount: Deactivated successfully. Jan 17 00:28:22.441136 containerd[1969]: time="2026-01-17T00:28:22.440451448Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:22.443884 containerd[1969]: time="2026-01-17T00:28:22.443838477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:28:22.444413 containerd[1969]: time="2026-01-17T00:28:22.443938821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:22.444524 kubelet[3181]: E0117 00:28:22.444201 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:28:22.444524 kubelet[3181]: E0117 00:28:22.444319 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:28:22.447000 kubelet[3181]: E0117 00:28:22.445528 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc8vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6bww6_calico-system(394e468b-e5d2-4096-94d5-a6a60d966235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:22.447000 kubelet[3181]: E0117 00:28:22.446772 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:28:22.447348 containerd[1969]: 
time="2026-01-17T00:28:22.445364219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.340 [INFO][5304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.341 [INFO][5304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" iface="eth0" netns="/var/run/netns/cni-7fb20a9f-e728-6775-d806-88109ba5d498" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.341 [INFO][5304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" iface="eth0" netns="/var/run/netns/cni-7fb20a9f-e728-6775-d806-88109ba5d498" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.341 [INFO][5304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" iface="eth0" netns="/var/run/netns/cni-7fb20a9f-e728-6775-d806-88109ba5d498" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.341 [INFO][5304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.342 [INFO][5304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.406 [INFO][5317] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.406 [INFO][5317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.415 [INFO][5317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.448 [WARNING][5317] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.448 [INFO][5317] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.452 [INFO][5317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:22.458653 containerd[1969]: 2026-01-17 00:28:22.455 [INFO][5304] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:22.463640 systemd[1]: run-netns-cni\x2d7fb20a9f\x2de728\x2d6775\x2dd806\x2d88109ba5d498.mount: Deactivated successfully. Jan 17 00:28:22.464297 containerd[1969]: time="2026-01-17T00:28:22.464021138Z" level=info msg="TearDown network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\" successfully" Jan 17 00:28:22.464297 containerd[1969]: time="2026-01-17T00:28:22.464051688Z" level=info msg="StopPodSandbox for \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\" returns successfully" Jan 17 00:28:22.469161 containerd[1969]: time="2026-01-17T00:28:22.466844425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wldcl,Uid:dc3409d6-ff21-405a-b461-9d804b643b66,Namespace:kube-system,Attempt:1,}" Jan 17 00:28:22.663025 kubelet[3181]: E0117 00:28:22.662330 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:28:22.678936 kubelet[3181]: E0117 00:28:22.678414 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:28:22.713870 systemd-networkd[1897]: calib9c9f76ed17: Link UP Jan 17 00:28:22.719875 systemd-networkd[1897]: calib9c9f76ed17: Gained carrier Jan 17 00:28:22.747554 kubelet[3181]: I0117 00:28:22.747484 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6m4tc" podStartSLOduration=41.747459748 podStartE2EDuration="41.747459748s" podCreationTimestamp="2026-01-17 00:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:22.743960637 +0000 UTC m=+46.765851503" watchObservedRunningTime="2026-01-17 00:28:22.747459748 +0000 UTC m=+46.769350613" Jan 17 00:28:22.751775 containerd[1969]: time="2026-01-17T00:28:22.750976973Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:22.754799 containerd[1969]: time="2026-01-17T00:28:22.753240639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:28:22.754799 containerd[1969]: time="2026-01-17T00:28:22.753288186Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:28:22.754987 kubelet[3181]: E0117 00:28:22.753550 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:28:22.754987 kubelet[3181]: E0117 00:28:22.753600 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:28:22.754987 kubelet[3181]: E0117 00:28:22.754395 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnjs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-877bf5958-fmwqm_calico-system(5fd74b61-87d1-45e4-b949-57645e5eb510): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:22.756957 kubelet[3181]: E0117 00:28:22.755643 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.546 [INFO][5331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0 csi-node-driver- calico-system c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9 1003 0 2026-01-17 00:27:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-25-116 csi-node-driver-5p9mr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib9c9f76ed17 [] [] }} ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.546 [INFO][5331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.621 [INFO][5354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" HandleID="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.622 [INFO][5354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" HandleID="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-116", "pod":"csi-node-driver-5p9mr", "timestamp":"2026-01-17 00:28:22.621906761 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:22.794924 
containerd[1969]: 2026-01-17 00:28:22.622 [INFO][5354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.622 [INFO][5354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.622 [INFO][5354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.632 [INFO][5354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.640 [INFO][5354] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.651 [INFO][5354] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.657 [INFO][5354] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.660 [INFO][5354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.661 [INFO][5354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.670 [INFO][5354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.681 [INFO][5354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.698 [INFO][5354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.134/26] block=192.168.47.128/26 handle="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.699 [INFO][5354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.134/26] handle="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" host="ip-172-31-25-116" Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.699 [INFO][5354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
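The IPAM sequence above is the whole allocation story for csi-node-driver-5p9mr: take the host-wide lock, confirm this node's affinity for the block 192.168.47.128/26, claim one address from it, write the block back, and release the lock. A minimal Go sketch of the containment arithmetic involved, using only values that appear in the log (not Calico's actual implementation):

```go
// Containment arithmetic behind the affinity check above; the addresses and
// prefix length are taken directly from the log entries.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.47.128/26") // the node's affine block
	addr := netip.MustParseAddr("192.168.47.134")       // IP claimed for csi-node-driver-5p9mr

	fmt.Printf("%s contains %s: %v\n", block.Masked(), addr, block.Contains(addr))
	fmt.Printf("block capacity: %d addresses\n", 1<<(32-block.Bits())) // a /26 spans .128-.191
}
```

A /26 holds 64 addresses, so this node can hand out up to 64 pod IPs from this block before IPAM would have to claim a second one.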
Jan 17 00:28:22.794924 containerd[1969]: 2026-01-17 00:28:22.699 [INFO][5354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.134/26] IPv6=[] ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" HandleID="k8s-pod-network.ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.796298 containerd[1969]: 2026-01-17 00:28:22.706 [INFO][5331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"csi-node-driver-5p9mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9c9f76ed17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:22.796298 containerd[1969]: 2026-01-17 00:28:22.707 [INFO][5331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.134/32] ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.796298 containerd[1969]: 2026-01-17 00:28:22.707 [INFO][5331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9c9f76ed17 ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.796298 containerd[1969]: 2026-01-17 00:28:22.721 [INFO][5331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.796298 containerd[1969]: 2026-01-17 00:28:22.724 [INFO][5331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" 
Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd", Pod:"csi-node-driver-5p9mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9c9f76ed17", MAC:"ee:cf:86:70:68:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:22.796298 containerd[1969]: 2026-01-17 00:28:22.790 [INFO][5331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd" Namespace="calico-system" Pod="csi-node-driver-5p9mr" WorkloadEndpoint="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:22.832095 containerd[1969]: time="2026-01-17T00:28:22.831839827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:22.832095 containerd[1969]: time="2026-01-17T00:28:22.831946851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:22.832095 containerd[1969]: time="2026-01-17T00:28:22.831964831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:22.833793 containerd[1969]: time="2026-01-17T00:28:22.832534107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:22.876356 systemd[1]: Started cri-containerd-ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd.scope - libcontainer container ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd. 
Jan 17 00:28:22.939329 systemd-networkd[1897]: caliccf8a46670d: Link UP Jan 17 00:28:22.940917 systemd-networkd[1897]: caliccf8a46670d: Gained carrier Jan 17 00:28:22.961984 containerd[1969]: time="2026-01-17T00:28:22.961572447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5p9mr,Uid:c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd\"" Jan 17 00:28:22.967064 containerd[1969]: time="2026-01-17T00:28:22.966921925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.588 [INFO][5346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0 coredns-674b8bbfcf- kube-system dc3409d6-ff21-405a-b461-9d804b643b66 1002 0 2026-01-17 00:27:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-116 coredns-674b8bbfcf-wldcl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliccf8a46670d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.589 [INFO][5346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.646 [INFO][5364] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" HandleID="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.646 [INFO][5364] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" HandleID="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5070), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-116", "pod":"coredns-674b8bbfcf-wldcl", "timestamp":"2026-01-17 00:28:22.646674643 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.647 [INFO][5364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.699 [INFO][5364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
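The coredns-674b8bbfcf-wldcl endpoint found above carries three named ports ({dns UDP 53}, {dns-tcp TCP 53}, {metrics TCP 9153}); in the Populated-endpoint dump further down, the same values reappear in hex as Port:0x35 and Port:0x23c1. A small Go sketch connecting the two representations (9153 is CoreDNS's standard Prometheus metrics port):

```go
// Port values from the coredns endpoint, decimal alongside the hex form
// used by the later v3.WorkloadEndpointPort dumps.
package main

import "fmt"

func main() {
	ports := []struct {
		name, proto string
		port        uint16
	}{
		{"dns", "UDP", 53},       // Port:0x35 in the later dump
		{"dns-tcp", "TCP", 53},   // Port:0x35
		{"metrics", "TCP", 9153}, // Port:0x23c1; CoreDNS's Prometheus metrics port
	}
	for _, p := range ports {
		fmt.Printf("%-8s %-3s %4d (0x%x)\n", p.name, p.proto, p.port, p.port)
	}
}
```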
Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.699 [INFO][5364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.739 [INFO][5364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.810 [INFO][5364] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.835 [INFO][5364] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.840 [INFO][5364] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.865 [INFO][5364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.865 [INFO][5364] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.878 [INFO][5364] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5 Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.907 [INFO][5364] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.923 [INFO][5364] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.135/26] block=192.168.47.128/26 handle="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.924 [INFO][5364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.135/26] handle="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" host="ip-172-31-25-116" Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.924 [INFO][5364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
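This second pass through IPAM claims 192.168.47.135/26 for coredns-wldcl, continuing the .133 → .134 → .135 progression from the same block. A simplified model of "Attempting to assign 1 addresses from block" (ipam.go 1219) as a first-free scan follows; it assumes, purely for illustration, that .128–.132 were handed out earlier in the boot (only .133–.135 are visible in this excerpt), and real Calico tracks allocations with per-block ordinal arrays rather than a map:

```go
// Simplified first-free scan standing in for "Attempting to assign 1
// addresses from block" (ipam.go 1219). Assumption for illustration only:
// .128-.132 were claimed earlier in the boot (not shown in this excerpt);
// .133 and .134 come from the two assignments logged above.
package main

import (
	"fmt"
	"net/netip"
)

func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Masked().Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.47.128/26")
	claimed := map[netip.Addr]bool{}
	for _, s := range []string{
		"192.168.47.128", "192.168.47.129", "192.168.47.130", // assumed earlier claims
		"192.168.47.131", "192.168.47.132", // assumed earlier claims
		"192.168.47.133", "192.168.47.134", // kube-controllers and csi-node-driver, per the log
	} {
		claimed[netip.MustParseAddr(s)] = true
	}
	if a, ok := nextFree(block, claimed); ok {
		fmt.Println("next assignment:", a) // 192.168.47.135, matching coredns-wldcl
	}
}
```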
Jan 17 00:28:22.979313 containerd[1969]: 2026-01-17 00:28:22.924 [INFO][5364] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.135/26] IPv6=[] ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" HandleID="k8s-pod-network.61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.982680 containerd[1969]: 2026-01-17 00:28:22.928 [INFO][5346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dc3409d6-ff21-405a-b461-9d804b643b66", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"coredns-674b8bbfcf-wldcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccf8a46670d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:22.982680 containerd[1969]: 2026-01-17 00:28:22.931 [INFO][5346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.135/32] ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.982680 containerd[1969]: 2026-01-17 00:28:22.932 [INFO][5346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccf8a46670d ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.982680 containerd[1969]: 2026-01-17 00:28:22.941 [INFO][5346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" 
WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:22.982680 containerd[1969]: 2026-01-17 00:28:22.942 [INFO][5346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dc3409d6-ff21-405a-b461-9d804b643b66", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5", Pod:"coredns-674b8bbfcf-wldcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccf8a46670d", MAC:"1a:99:dc:ac:0e:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:22.982680 containerd[1969]: 2026-01-17 00:28:22.975 [INFO][5346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5" Namespace="kube-system" Pod="coredns-674b8bbfcf-wldcl" WorkloadEndpoint="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:23.019119 containerd[1969]: time="2026-01-17T00:28:23.018908167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:23.019315 containerd[1969]: time="2026-01-17T00:28:23.019059240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:23.019315 containerd[1969]: time="2026-01-17T00:28:23.019129755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:23.022854 containerd[1969]: time="2026-01-17T00:28:23.022042263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:23.050977 systemd[1]: Started cri-containerd-61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5.scope - libcontainer container 61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5. Jan 17 00:28:23.100639 containerd[1969]: time="2026-01-17T00:28:23.100590430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wldcl,Uid:dc3409d6-ff21-405a-b461-9d804b643b66,Namespace:kube-system,Attempt:1,} returns sandbox id \"61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5\"" Jan 17 00:28:23.110183 containerd[1969]: time="2026-01-17T00:28:23.110144239Z" level=info msg="CreateContainer within sandbox \"61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:28:23.131952 containerd[1969]: time="2026-01-17T00:28:23.131898268Z" level=info msg="CreateContainer within sandbox \"61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e767ccef64e7020c9c29336fd370ac8ceec85b2fcd02eed401d148bbeae84bda\"" Jan 17 00:28:23.132709 containerd[1969]: time="2026-01-17T00:28:23.132676721Z" level=info msg="StartContainer for \"e767ccef64e7020c9c29336fd370ac8ceec85b2fcd02eed401d148bbeae84bda\"" Jan 17 00:28:23.164039 systemd[1]: Started cri-containerd-e767ccef64e7020c9c29336fd370ac8ceec85b2fcd02eed401d148bbeae84bda.scope - libcontainer container e767ccef64e7020c9c29336fd370ac8ceec85b2fcd02eed401d148bbeae84bda. Jan 17 00:28:23.198284 containerd[1969]: time="2026-01-17T00:28:23.198239758Z" level=info msg="StartContainer for \"e767ccef64e7020c9c29336fd370ac8ceec85b2fcd02eed401d148bbeae84bda\" returns successfully" Jan 17 00:28:23.235744 containerd[1969]: time="2026-01-17T00:28:23.235695750Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:23.254966 containerd[1969]: time="2026-01-17T00:28:23.254897384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:28:23.255651 containerd[1969]: time="2026-01-17T00:28:23.254938282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:28:23.255798 kubelet[3181]: E0117 00:28:23.255198 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:28:23.255798 kubelet[3181]: E0117 00:28:23.255252 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:28:23.255798 kubelet[3181]: E0117 00:28:23.255439 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:23.257757 containerd[1969]: time="2026-01-17T00:28:23.257709228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:28:23.269267 containerd[1969]: time="2026-01-17T00:28:23.269226629Z" level=info msg="StopPodSandbox for \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\"" Jan 17 00:28:23.315935 systemd-networkd[1897]: cali7f3bd27d291: Gained IPv6LL Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.342 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.342 [INFO][5525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" iface="eth0" netns="/var/run/netns/cni-7e394578-167b-ad7e-c109-3330382dc1e4" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.342 [INFO][5525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" iface="eth0" netns="/var/run/netns/cni-7e394578-167b-ad7e-c109-3330382dc1e4" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.343 [INFO][5525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" iface="eth0" netns="/var/run/netns/cni-7e394578-167b-ad7e-c109-3330382dc1e4" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.343 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.343 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.367 [INFO][5533] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.367 [INFO][5533] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.368 [INFO][5533] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.375 [WARNING][5533] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.375 [INFO][5533] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.377 [INFO][5533] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:23.381838 containerd[1969]: 2026-01-17 00:28:23.379 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:23.383826 containerd[1969]: time="2026-01-17T00:28:23.381939684Z" level=info msg="TearDown network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\" successfully" Jan 17 00:28:23.383826 containerd[1969]: time="2026-01-17T00:28:23.381964321Z" level=info msg="StopPodSandbox for \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\" returns successfully" Jan 17 00:28:23.383826 containerd[1969]: time="2026-01-17T00:28:23.382640353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-k9mxk,Uid:0b97d5ab-19c5-4717-a6ca-1a7a01547f6c,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:28:23.426648 systemd[1]: run-netns-cni\x2d7e394578\x2d167b\x2dad7e\x2dc109\x2d3330382dc1e4.mount: Deactivated successfully. 
Jan 17 00:28:23.508669 systemd-networkd[1897]: cali3a52d1adafb: Gained IPv6LL Jan 17 00:28:23.548685 containerd[1969]: time="2026-01-17T00:28:23.548637258Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:23.551299 containerd[1969]: time="2026-01-17T00:28:23.551005730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:28:23.551517 containerd[1969]: time="2026-01-17T00:28:23.551190430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:28:23.551656 kubelet[3181]: E0117 00:28:23.551616 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:28:23.552038 kubelet[3181]: E0117 00:28:23.551667 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:28:23.552038 kubelet[3181]: E0117 00:28:23.551792 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:23.553165 kubelet[3181]: E0117 00:28:23.553103 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:28:23.568599 systemd-networkd[1897]: calic6340591d54: Link UP Jan 17 00:28:23.572595 systemd-networkd[1897]: calic6340591d54: Gained carrier Jan 17 00:28:23.573730 systemd-networkd[1897]: cali9f3dfb8abd9: Gained IPv6LL Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.463 [INFO][5539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0 calico-apiserver-6747446b5- calico-apiserver 0b97d5ab-19c5-4717-a6ca-1a7a01547f6c 1037 
0 2026-01-17 00:27:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6747446b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-116 calico-apiserver-6747446b5-k9mxk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic6340591d54 [] [] }} ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.463 [INFO][5539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.518 [INFO][5551] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" HandleID="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.518 [INFO][5551] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" HandleID="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-116", "pod":"calico-apiserver-6747446b5-k9mxk", "timestamp":"2026-01-17 00:28:23.518200799 +0000 UTC"}, Hostname:"ip-172-31-25-116", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.518 [INFO][5551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.518 [INFO][5551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.518 [INFO][5551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-116' Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.525 [INFO][5551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.530 [INFO][5551] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.536 [INFO][5551] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.538 [INFO][5551] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.541 [INFO][5551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.541 [INFO][5551] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.542 [INFO][5551] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43 Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.547 [INFO][5551] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.558 [INFO][5551] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.136/26] block=192.168.47.128/26 handle="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.558 [INFO][5551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.136/26] handle="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" host="ip-172-31-25-116" Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.558 [INFO][5551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:28:23.595273 containerd[1969]: 2026-01-17 00:28:23.558 [INFO][5551] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.136/26] IPv6=[] ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" HandleID="k8s-pod-network.86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.596820 containerd[1969]: 2026-01-17 00:28:23.562 [INFO][5539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"", Pod:"calico-apiserver-6747446b5-k9mxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6340591d54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:23.596820 containerd[1969]: 2026-01-17 00:28:23.562 [INFO][5539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.136/32] ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.596820 containerd[1969]: 2026-01-17 00:28:23.562 [INFO][5539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6340591d54 ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.596820 containerd[1969]: 2026-01-17 00:28:23.576 [INFO][5539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.596820 containerd[1969]: 2026-01-17 00:28:23.577 [INFO][5539] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43", Pod:"calico-apiserver-6747446b5-k9mxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6340591d54", MAC:"b6:8e:fc:5b:aa:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:23.596820 containerd[1969]: 2026-01-17 00:28:23.591 [INFO][5539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43" Namespace="calico-apiserver" Pod="calico-apiserver-6747446b5-k9mxk" WorkloadEndpoint="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:23.621922 containerd[1969]: time="2026-01-17T00:28:23.621792340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:28:23.621922 containerd[1969]: time="2026-01-17T00:28:23.621865222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:28:23.621922 containerd[1969]: time="2026-01-17T00:28:23.621887113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:23.622263 containerd[1969]: time="2026-01-17T00:28:23.622021659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:28:23.656142 systemd[1]: run-containerd-runc-k8s.io-86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43-runc.GAuLZi.mount: Deactivated successfully. Jan 17 00:28:23.666982 systemd[1]: Started cri-containerd-86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43.scope - libcontainer container 86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43. 
Jan 17 00:28:23.686988 kubelet[3181]: E0117 00:28:23.686906 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:28:23.694784 kubelet[3181]: E0117 00:28:23.694161 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:28:23.697618 kubelet[3181]: E0117 00:28:23.697566 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:28:23.729696 kubelet[3181]: I0117 00:28:23.728930 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wldcl" podStartSLOduration=42.728913678 podStartE2EDuration="42.728913678s" podCreationTimestamp="2026-01-17 00:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:23.727532289 +0000 UTC m=+47.749423154" watchObservedRunningTime="2026-01-17 00:28:23.728913678 +0000 UTC m=+47.750804550" Jan 17 00:28:23.734875 containerd[1969]: time="2026-01-17T00:28:23.734746265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6747446b5-k9mxk,Uid:0b97d5ab-19c5-4717-a6ca-1a7a01547f6c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43\"" Jan 17 00:28:23.742334 containerd[1969]: time="2026-01-17T00:28:23.742122735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:23.767312 systemd-networkd[1897]: calib9c9f76ed17: Gained IPv6LL Jan 17 00:28:23.999784 containerd[1969]: 
time="2026-01-17T00:28:23.999522425Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:24.001852 containerd[1969]: time="2026-01-17T00:28:24.001797881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:28:24.001985 containerd[1969]: time="2026-01-17T00:28:24.001873213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:24.002765 kubelet[3181]: E0117 00:28:24.002181 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:24.002765 kubelet[3181]: E0117 00:28:24.002229 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:24.002765 kubelet[3181]: E0117 00:28:24.002355 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6f9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-k9mxk_calico-apiserver(0b97d5ab-19c5-4717-a6ca-1a7a01547f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:24.004219 kubelet[3181]: E0117 00:28:24.004164 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:28:24.659935 systemd-networkd[1897]: caliccf8a46670d: Gained IPv6LL Jan 17 00:28:24.694458 kubelet[3181]: E0117 00:28:24.694234 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:28:24.696274 kubelet[3181]: E0117 00:28:24.696050 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" 
podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:28:25.044088 systemd-networkd[1897]: calic6340591d54: Gained IPv6LL Jan 17 00:28:25.695724 kubelet[3181]: E0117 00:28:25.695658 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:28:27.427185 systemd[1]: Started sshd@7-172.31.25.116:22-4.153.228.146:57254.service - OpenSSH per-connection server daemon (4.153.228.146:57254). Jan 17 00:28:27.738061 kubelet[3181]: I0117 00:28:27.738026 3181 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:28:27.955713 ntpd[1947]: Listen normally on 7 vxlan.calico 192.168.47.128:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 7 vxlan.calico 192.168.47.128:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 8 calid910dbead72 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 9 vxlan.calico [fe80::6486:89ff:fef9:bc60%5]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 10 cali4302629f2a8 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 11 cali9f3dfb8abd9 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 12 cali7f3bd27d291 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 13 cali3a52d1adafb [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 14 calib9c9f76ed17 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 15 caliccf8a46670d [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 00:28:27.958354 ntpd[1947]: 17 Jan 00:28:27 ntpd[1947]: Listen normally on 16 calic6340591d54 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 17 00:28:27.955803 ntpd[1947]: Listen normally on 8 calid910dbead72 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 17 00:28:27.955847 ntpd[1947]: Listen normally on 9 vxlan.calico [fe80::6486:89ff:fef9:bc60%5]:123 Jan 17 00:28:27.955875 ntpd[1947]: Listen normally on 10 cali4302629f2a8 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 00:28:27.955907 ntpd[1947]: Listen normally on 11 cali9f3dfb8abd9 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 00:28:27.955933 ntpd[1947]: Listen normally on 12 cali7f3bd27d291 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 00:28:27.955960 ntpd[1947]: Listen normally on 13 cali3a52d1adafb [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 00:28:27.955987 ntpd[1947]: Listen normally on 14 calib9c9f76ed17 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 00:28:27.962893 sshd[5627]: Accepted publickey for core from 4.153.228.146 port 57254 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:28:27.956013 ntpd[1947]: Listen normally on 15 caliccf8a46670d [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 00:28:27.956039 ntpd[1947]: Listen normally on 16 
calic6340591d54 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 17 00:28:27.967209 sshd[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:27.982553 systemd-logind[1955]: New session 8 of user core. Jan 17 00:28:27.987006 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:28:29.069652 sshd[5627]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:29.073575 systemd[1]: sshd@7-172.31.25.116:22-4.153.228.146:57254.service: Deactivated successfully. Jan 17 00:28:29.076103 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:28:29.078140 systemd-logind[1955]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:28:29.079468 systemd-logind[1955]: Removed session 8. Jan 17 00:28:31.277183 containerd[1969]: time="2026-01-17T00:28:31.276291913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:28:31.563184 containerd[1969]: time="2026-01-17T00:28:31.562936474Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:31.565158 containerd[1969]: time="2026-01-17T00:28:31.565097397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:28:31.565311 containerd[1969]: time="2026-01-17T00:28:31.565186821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:28:31.565386 kubelet[3181]: E0117 00:28:31.565344 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:31.565703 kubelet[3181]: E0117 00:28:31.565396 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:31.565703 kubelet[3181]: E0117 00:28:31.565517 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:04b6a3f3f16b4f078d52b8b865750bfc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:31.568343 containerd[1969]: time="2026-01-17T00:28:31.568237642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:28:31.831834 containerd[1969]: time="2026-01-17T00:28:31.831687988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:31.833831 containerd[1969]: time="2026-01-17T00:28:31.833773058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:28:31.833952 containerd[1969]: time="2026-01-17T00:28:31.833855074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:28:31.834017 kubelet[3181]: E0117 00:28:31.833982 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:31.834077 kubelet[3181]: E0117 00:28:31.834030 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:31.834194 kubelet[3181]: E0117 00:28:31.834144 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:31.835665 kubelet[3181]: E0117 00:28:31.835618 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:28:34.175230 systemd[1]: Started sshd@8-172.31.25.116:22-4.153.228.146:57256.service - OpenSSH per-connection server daemon (4.153.228.146:57256). 
Jan 17 00:28:34.268944 containerd[1969]: time="2026-01-17T00:28:34.268900198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:28:34.512778 containerd[1969]: time="2026-01-17T00:28:34.512716980Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:34.515271 containerd[1969]: time="2026-01-17T00:28:34.515182031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:28:34.515404 containerd[1969]: time="2026-01-17T00:28:34.515235694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:34.515555 kubelet[3181]: E0117 00:28:34.515508 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:28:34.516100 kubelet[3181]: E0117 00:28:34.515568 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:28:34.516100 kubelet[3181]: E0117 00:28:34.515711 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc8vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6bww6_calico-system(394e468b-e5d2-4096-94d5-a6a60d966235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:34.517452 kubelet[3181]: E0117 00:28:34.517416 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:28:34.722121 sshd[5695]: Accepted publickey for core from 4.153.228.146 port 57256 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:28:34.723716 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:34.729053 systemd-logind[1955]: New session 9 of user core. Jan 17 00:28:34.737004 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:28:35.222976 sshd[5695]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:35.228860 systemd[1]: sshd@8-172.31.25.116:22-4.153.228.146:57256.service: Deactivated successfully. Jan 17 00:28:35.231253 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:28:35.232994 systemd-logind[1955]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:28:35.235221 systemd-logind[1955]: Removed session 9. 
Jan 17 00:28:36.236658 containerd[1969]: time="2026-01-17T00:28:36.236615527Z" level=info msg="StopPodSandbox for \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\"" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.298 [WARNING][5716] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.298 [INFO][5716] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.298 [INFO][5716] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" iface="eth0" netns="" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.298 [INFO][5716] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.298 [INFO][5716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.323 [INFO][5725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.323 [INFO][5725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.323 [INFO][5725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.334 [WARNING][5725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.334 [INFO][5725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.337 [INFO][5725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.343511 containerd[1969]: 2026-01-17 00:28:36.340 [INFO][5716] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.343511 containerd[1969]: time="2026-01-17T00:28:36.343382594Z" level=info msg="TearDown network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\" successfully" Jan 17 00:28:36.343511 containerd[1969]: time="2026-01-17T00:28:36.343414713Z" level=info msg="StopPodSandbox for \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\" returns successfully" Jan 17 00:28:36.344301 containerd[1969]: time="2026-01-17T00:28:36.344045390Z" level=info msg="RemovePodSandbox for \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\"" Jan 17 00:28:36.344301 containerd[1969]: time="2026-01-17T00:28:36.344091571Z" level=info msg="Forcibly stopping sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\"" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.386 [WARNING][5739] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" WorkloadEndpoint="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.387 [INFO][5739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.387 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" iface="eth0" netns="" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.387 [INFO][5739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.387 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.414 [INFO][5746] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.414 [INFO][5746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.414 [INFO][5746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.421 [WARNING][5746] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.421 [INFO][5746] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" HandleID="k8s-pod-network.82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Workload="ip--172--31--25--116-k8s-whisker--68745d549--cbm5w-eth0" Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.423 [INFO][5746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.427810 containerd[1969]: 2026-01-17 00:28:36.425 [INFO][5739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402" Jan 17 00:28:36.428249 containerd[1969]: time="2026-01-17T00:28:36.427914713Z" level=info msg="TearDown network for sandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\" successfully" Jan 17 00:28:36.440166 containerd[1969]: time="2026-01-17T00:28:36.439888045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:36.440166 containerd[1969]: time="2026-01-17T00:28:36.439995352Z" level=info msg="RemovePodSandbox \"82a60a1b72cb75ffb4d1b9f0674e111d6410aff14b699674dda8563867649402\" returns successfully" Jan 17 00:28:36.440653 containerd[1969]: time="2026-01-17T00:28:36.440614676Z" level=info msg="StopPodSandbox for \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\"" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.491 [WARNING][5760] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dc3409d6-ff21-405a-b461-9d804b643b66", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5", Pod:"coredns-674b8bbfcf-wldcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccf8a46670d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.492 [INFO][5760] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.492 [INFO][5760] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" iface="eth0" netns="" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.492 [INFO][5760] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.492 [INFO][5760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.519 [INFO][5767] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.519 [INFO][5767] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.520 [INFO][5767] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.526 [WARNING][5767] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.526 [INFO][5767] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.528 [INFO][5767] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.532285 containerd[1969]: 2026-01-17 00:28:36.530 [INFO][5760] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.532285 containerd[1969]: time="2026-01-17T00:28:36.532093962Z" level=info msg="TearDown network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\" successfully" Jan 17 00:28:36.532285 containerd[1969]: time="2026-01-17T00:28:36.532117450Z" level=info msg="StopPodSandbox for \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\" returns successfully" Jan 17 00:28:36.532818 containerd[1969]: time="2026-01-17T00:28:36.532542421Z" level=info msg="RemovePodSandbox for \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\"" Jan 17 00:28:36.532818 containerd[1969]: time="2026-01-17T00:28:36.532569315Z" level=info msg="Forcibly stopping sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\"" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.568 [WARNING][5781] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"dc3409d6-ff21-405a-b461-9d804b643b66", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"61471b0e83758809d181e3925cfe1ff1c9d18627cfa2a09423d7a9faeb8672c5", Pod:"coredns-674b8bbfcf-wldcl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccf8a46670d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.568 [INFO][5781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.568 [INFO][5781] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" iface="eth0" netns="" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.568 [INFO][5781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.568 [INFO][5781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.593 [INFO][5788] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.593 [INFO][5788] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.593 [INFO][5788] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.606 [WARNING][5788] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.606 [INFO][5788] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" HandleID="k8s-pod-network.165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--wldcl-eth0" Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.611 [INFO][5788] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.617436 containerd[1969]: 2026-01-17 00:28:36.614 [INFO][5781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f" Jan 17 00:28:36.619100 containerd[1969]: time="2026-01-17T00:28:36.617484359Z" level=info msg="TearDown network for sandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\" successfully" Jan 17 00:28:36.624070 containerd[1969]: time="2026-01-17T00:28:36.624007174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:36.624222 containerd[1969]: time="2026-01-17T00:28:36.624085613Z" level=info msg="RemovePodSandbox \"165bae627e9009d0877fad540d0b4d16ae663091e6f3d84d7a5b8387433bc64f\" returns successfully" Jan 17 00:28:36.624816 containerd[1969]: time="2026-01-17T00:28:36.624562768Z" level=info msg="StopPodSandbox for \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\"" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.663 [WARNING][5803] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67a190ca-72c5-48e2-b272-116175d17788", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1", Pod:"coredns-674b8bbfcf-6m4tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f3bd27d291", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.664 [INFO][5803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.664 [INFO][5803] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" iface="eth0" netns="" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.664 [INFO][5803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.664 [INFO][5803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.694 [INFO][5811] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.694 [INFO][5811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.694 [INFO][5811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.702 [WARNING][5811] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.702 [INFO][5811] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.705 [INFO][5811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.708521 containerd[1969]: 2026-01-17 00:28:36.706 [INFO][5803] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.709412 containerd[1969]: time="2026-01-17T00:28:36.709060401Z" level=info msg="TearDown network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\" successfully" Jan 17 00:28:36.709412 containerd[1969]: time="2026-01-17T00:28:36.709091964Z" level=info msg="StopPodSandbox for \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\" returns successfully" Jan 17 00:28:36.709657 containerd[1969]: time="2026-01-17T00:28:36.709631316Z" level=info msg="RemovePodSandbox for \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\"" Jan 17 00:28:36.709737 containerd[1969]: time="2026-01-17T00:28:36.709663531Z" level=info msg="Forcibly stopping sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\"" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.764 [WARNING][5824] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67a190ca-72c5-48e2-b272-116175d17788", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"4fde1b80a9e399cedf6542dd9d4f885ab9d55f1f94270150e62d41e66e4166b1", Pod:"coredns-674b8bbfcf-6m4tc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f3bd27d291", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.765 [INFO][5824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.766 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" iface="eth0" netns="" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.766 [INFO][5824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.766 [INFO][5824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.791 [INFO][5831] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.791 [INFO][5831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.791 [INFO][5831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.798 [WARNING][5831] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.798 [INFO][5831] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" HandleID="k8s-pod-network.fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Workload="ip--172--31--25--116-k8s-coredns--674b8bbfcf--6m4tc-eth0" Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.800 [INFO][5831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.808089 containerd[1969]: 2026-01-17 00:28:36.802 [INFO][5824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60" Jan 17 00:28:36.808089 containerd[1969]: time="2026-01-17T00:28:36.807177939Z" level=info msg="TearDown network for sandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\" successfully" Jan 17 00:28:36.817292 containerd[1969]: time="2026-01-17T00:28:36.816794378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:36.817292 containerd[1969]: time="2026-01-17T00:28:36.816863342Z" level=info msg="RemovePodSandbox \"fb511b0036920e53a69e43db22392440bb1ca585f3836d1a38147d2ae20b8a60\" returns successfully" Jan 17 00:28:36.817459 containerd[1969]: time="2026-01-17T00:28:36.817416601Z" level=info msg="StopPodSandbox for \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\"" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.863 [WARNING][5846] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd", Pod:"csi-node-driver-5p9mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9c9f76ed17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.863 [INFO][5846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.863 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" iface="eth0" netns="" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.863 [INFO][5846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.863 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.886 [INFO][5854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.887 [INFO][5854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.887 [INFO][5854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.893 [WARNING][5854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.893 [INFO][5854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.895 [INFO][5854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.899574 containerd[1969]: 2026-01-17 00:28:36.897 [INFO][5846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.901312 containerd[1969]: time="2026-01-17T00:28:36.899700155Z" level=info msg="TearDown network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\" successfully" Jan 17 00:28:36.901312 containerd[1969]: time="2026-01-17T00:28:36.899726246Z" level=info msg="StopPodSandbox for \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\" returns successfully" Jan 17 00:28:36.901312 containerd[1969]: time="2026-01-17T00:28:36.900354340Z" level=info msg="RemovePodSandbox for \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\"" Jan 17 00:28:36.901312 containerd[1969]: time="2026-01-17T00:28:36.900380515Z" level=info msg="Forcibly stopping sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\"" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.943 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"ba23bbc6d669a742a006541db3db1948e3301f4c65758f2c2d7d6488dcbd17fd", Pod:"csi-node-driver-5p9mr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib9c9f76ed17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.944 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.944 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" iface="eth0" netns="" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.944 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.944 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.969 [INFO][5875] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.969 [INFO][5875] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.969 [INFO][5875] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.980 [WARNING][5875] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.980 [INFO][5875] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" HandleID="k8s-pod-network.aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Workload="ip--172--31--25--116-k8s-csi--node--driver--5p9mr-eth0" Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.982 [INFO][5875] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:36.986660 containerd[1969]: 2026-01-17 00:28:36.984 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2" Jan 17 00:28:36.987277 containerd[1969]: time="2026-01-17T00:28:36.986699664Z" level=info msg="TearDown network for sandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\" successfully" Jan 17 00:28:36.992430 containerd[1969]: time="2026-01-17T00:28:36.992342476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:36.992430 containerd[1969]: time="2026-01-17T00:28:36.992412045Z" level=info msg="RemovePodSandbox \"aa623554b88e2d365c9ca4de4766176a6a3a4f148006519ede81bf4a4a4e44b2\" returns successfully" Jan 17 00:28:36.993214 containerd[1969]: time="2026-01-17T00:28:36.992930807Z" level=info msg="StopPodSandbox for \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\"" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.030 [WARNING][5889] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43", Pod:"calico-apiserver-6747446b5-k9mxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6340591d54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.031 [INFO][5889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.031 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" iface="eth0" netns="" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.031 [INFO][5889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.031 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.053 [INFO][5897] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.053 [INFO][5897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.053 [INFO][5897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.059 [WARNING][5897] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.060 [INFO][5897] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.062 [INFO][5897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.068844 containerd[1969]: 2026-01-17 00:28:37.063 [INFO][5889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.070191 containerd[1969]: time="2026-01-17T00:28:37.068882972Z" level=info msg="TearDown network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\" successfully" Jan 17 00:28:37.070191 containerd[1969]: time="2026-01-17T00:28:37.068911310Z" level=info msg="StopPodSandbox for \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\" returns successfully" Jan 17 00:28:37.070191 containerd[1969]: time="2026-01-17T00:28:37.069614270Z" level=info msg="RemovePodSandbox for \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\"" Jan 17 00:28:37.070191 containerd[1969]: time="2026-01-17T00:28:37.069644951Z" level=info msg="Forcibly stopping sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\"" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.107 [WARNING][5911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b97d5ab-19c5-4717-a6ca-1a7a01547f6c", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"86de9774e0201c6502a41f656335f915265f2534d55a209acd5f11060f9dff43", Pod:"calico-apiserver-6747446b5-k9mxk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic6340591d54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.107 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.107 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" iface="eth0" netns="" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.107 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.107 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.130 [INFO][5919] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.131 [INFO][5919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.131 [INFO][5919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.137 [WARNING][5919] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.137 [INFO][5919] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" HandleID="k8s-pod-network.132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--k9mxk-eth0" Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.139 [INFO][5919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.143240 containerd[1969]: 2026-01-17 00:28:37.141 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c" Jan 17 00:28:37.144172 containerd[1969]: time="2026-01-17T00:28:37.143289774Z" level=info msg="TearDown network for sandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\" successfully" Jan 17 00:28:37.149425 containerd[1969]: time="2026-01-17T00:28:37.149367769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:37.149691 containerd[1969]: time="2026-01-17T00:28:37.149440322Z" level=info msg="RemovePodSandbox \"132e4c7cf2c82242fcc84d095452e816156babc424b398e2092142c694c90b4c\" returns successfully" Jan 17 00:28:37.149946 containerd[1969]: time="2026-01-17T00:28:37.149918689Z" level=info msg="StopPodSandbox for \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\"" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.190 [WARNING][5933] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0", GenerateName:"calico-kube-controllers-877bf5958-", Namespace:"calico-system", SelfLink:"", UID:"5fd74b61-87d1-45e4-b949-57645e5eb510", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"877bf5958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0", Pod:"calico-kube-controllers-877bf5958-fmwqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a52d1adafb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.191 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.191 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" iface="eth0" netns="" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.191 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.191 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.217 [INFO][5941] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.217 [INFO][5941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.217 [INFO][5941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.227 [WARNING][5941] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.227 [INFO][5941] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.230 [INFO][5941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.234208 containerd[1969]: 2026-01-17 00:28:37.232 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.235351 containerd[1969]: time="2026-01-17T00:28:37.234255956Z" level=info msg="TearDown network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\" successfully" Jan 17 00:28:37.235351 containerd[1969]: time="2026-01-17T00:28:37.234285626Z" level=info msg="StopPodSandbox for \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\" returns successfully" Jan 17 00:28:37.235351 containerd[1969]: time="2026-01-17T00:28:37.234843604Z" level=info msg="RemovePodSandbox for \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\"" Jan 17 00:28:37.235351 containerd[1969]: time="2026-01-17T00:28:37.234878047Z" level=info msg="Forcibly stopping sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\"" Jan 17 00:28:37.270981 containerd[1969]: time="2026-01-17T00:28:37.270935486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.281 [WARNING][5955] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0", GenerateName:"calico-kube-controllers-877bf5958-", Namespace:"calico-system", SelfLink:"", UID:"5fd74b61-87d1-45e4-b949-57645e5eb510", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"877bf5958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"ad6ff8343b037b464c6113d2bc831745f879aea947b8357da0bcae6a6d8744a0", Pod:"calico-kube-controllers-877bf5958-fmwqm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a52d1adafb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.282 [INFO][5955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.282 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" iface="eth0" netns="" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.282 [INFO][5955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.282 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.324 [INFO][5962] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.324 [INFO][5962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.325 [INFO][5962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.330 [WARNING][5962] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.330 [INFO][5962] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" HandleID="k8s-pod-network.a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Workload="ip--172--31--25--116-k8s-calico--kube--controllers--877bf5958--fmwqm-eth0" Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.334 [INFO][5962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.338498 containerd[1969]: 2026-01-17 00:28:37.336 [INFO][5955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800" Jan 17 00:28:37.339328 containerd[1969]: time="2026-01-17T00:28:37.338494010Z" level=info msg="TearDown network for sandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\" successfully" Jan 17 00:28:37.344902 containerd[1969]: time="2026-01-17T00:28:37.344837732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:37.344902 containerd[1969]: time="2026-01-17T00:28:37.344898860Z" level=info msg="RemovePodSandbox \"a50d4ef57d8d60b323c10791d3fd043454b2a1a72344112797fe549954a0c800\" returns successfully" Jan 17 00:28:37.345423 containerd[1969]: time="2026-01-17T00:28:37.345399359Z" level=info msg="StopPodSandbox for \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\"" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.382 [WARNING][5976] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"394e468b-e5d2-4096-94d5-a6a60d966235", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23", Pod:"goldmane-666569f655-6bww6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f3dfb8abd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.382 [INFO][5976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.382 [INFO][5976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" iface="eth0" netns="" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.382 [INFO][5976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.383 [INFO][5976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.405 [INFO][5983] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.405 [INFO][5983] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.405 [INFO][5983] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.412 [WARNING][5983] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.412 [INFO][5983] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.414 [INFO][5983] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.418741 containerd[1969]: 2026-01-17 00:28:37.416 [INFO][5976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.418741 containerd[1969]: time="2026-01-17T00:28:37.418570309Z" level=info msg="TearDown network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\" successfully" Jan 17 00:28:37.418741 containerd[1969]: time="2026-01-17T00:28:37.418595113Z" level=info msg="StopPodSandbox for \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\" returns successfully" Jan 17 00:28:37.419520 containerd[1969]: time="2026-01-17T00:28:37.419487858Z" level=info msg="RemovePodSandbox for \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\"" Jan 17 00:28:37.419520 containerd[1969]: time="2026-01-17T00:28:37.419523787Z" level=info msg="Forcibly stopping sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\"" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.462 [WARNING][5997] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"394e468b-e5d2-4096-94d5-a6a60d966235", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"7cdf1caebcccbcc1b953ba7c63251016ca75f9f14091d04597f96c56826c3b23", Pod:"goldmane-666569f655-6bww6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9f3dfb8abd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.462 [INFO][5997] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.462 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" iface="eth0" netns="" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.462 [INFO][5997] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.462 [INFO][5997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.484 [INFO][6004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.484 [INFO][6004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.484 [INFO][6004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.492 [WARNING][6004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.492 [INFO][6004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" HandleID="k8s-pod-network.23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Workload="ip--172--31--25--116-k8s-goldmane--666569f655--6bww6-eth0" Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.494 [INFO][6004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.498694 containerd[1969]: 2026-01-17 00:28:37.496 [INFO][5997] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa" Jan 17 00:28:37.499225 containerd[1969]: time="2026-01-17T00:28:37.498740636Z" level=info msg="TearDown network for sandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\" successfully" Jan 17 00:28:37.504312 containerd[1969]: time="2026-01-17T00:28:37.504260623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:37.504453 containerd[1969]: time="2026-01-17T00:28:37.504339613Z" level=info msg="RemovePodSandbox \"23f000ca8b6d45a1c5378a80a0f963e1f41694609a448648020fa4684cd7faaa\" returns successfully" Jan 17 00:28:37.505082 containerd[1969]: time="2026-01-17T00:28:37.505052136Z" level=info msg="StopPodSandbox for \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\"" Jan 17 00:28:37.555620 containerd[1969]: time="2026-01-17T00:28:37.555394511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:37.558068 containerd[1969]: time="2026-01-17T00:28:37.558006298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:28:37.558240 containerd[1969]: time="2026-01-17T00:28:37.558020541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:28:37.558368 kubelet[3181]: E0117 00:28:37.558319 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:28:37.559075 kubelet[3181]: E0117 00:28:37.558385 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:28:37.559075 kubelet[3181]: E0117 00:28:37.558741 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnjs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-877bf5958-fmwqm_calico-system(5fd74b61-87d1-45e4-b949-57645e5eb510): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:37.560301 containerd[1969]: time="2026-01-17T00:28:37.560042874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:37.560467 kubelet[3181]: E0117 00:28:37.560085 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.540 [WARNING][6018] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc407ca1-a787-4c80-b23e-a6c88347fad4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1", Pod:"calico-apiserver-6747446b5-7hcx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4302629f2a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.541 [INFO][6018] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.541 [INFO][6018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" iface="eth0" netns="" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.541 [INFO][6018] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.541 [INFO][6018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.576 [INFO][6025] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.576 [INFO][6025] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.576 [INFO][6025] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.587 [WARNING][6025] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.587 [INFO][6025] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.591 [INFO][6025] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.595586 containerd[1969]: 2026-01-17 00:28:37.593 [INFO][6018] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.598402 containerd[1969]: time="2026-01-17T00:28:37.595858318Z" level=info msg="TearDown network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\" successfully" Jan 17 00:28:37.598402 containerd[1969]: time="2026-01-17T00:28:37.595906737Z" level=info msg="StopPodSandbox for \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\" returns successfully" Jan 17 00:28:37.598402 containerd[1969]: time="2026-01-17T00:28:37.597692300Z" level=info msg="RemovePodSandbox for \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\"" Jan 17 00:28:37.598402 containerd[1969]: time="2026-01-17T00:28:37.597731979Z" level=info msg="Forcibly stopping sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\"" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.634 [WARNING][6039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0", GenerateName:"calico-apiserver-6747446b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc407ca1-a787-4c80-b23e-a6c88347fad4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6747446b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-116", ContainerID:"de48a006c17f7c4f085acc35dfafa06532bff9c85f0a2e8a6bd2ff73ee2126a1", Pod:"calico-apiserver-6747446b5-7hcx6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4302629f2a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.634 [INFO][6039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.634 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" iface="eth0" netns="" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.634 [INFO][6039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.635 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.659 [INFO][6046] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.659 [INFO][6046] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.659 [INFO][6046] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.666 [WARNING][6046] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.666 [INFO][6046] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" HandleID="k8s-pod-network.6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Workload="ip--172--31--25--116-k8s-calico--apiserver--6747446b5--7hcx6-eth0" Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.668 [INFO][6046] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:28:37.673358 containerd[1969]: 2026-01-17 00:28:37.670 [INFO][6039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831" Jan 17 00:28:37.673358 containerd[1969]: time="2026-01-17T00:28:37.672125089Z" level=info msg="TearDown network for sandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\" successfully" Jan 17 00:28:37.677491 containerd[1969]: time="2026-01-17T00:28:37.677440998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:28:37.677703 containerd[1969]: time="2026-01-17T00:28:37.677673680Z" level=info msg="RemovePodSandbox \"6c7940e677c655fa1cbbad2d7d2e777c9593b01e43dfea4149b7a7c853bd3831\" returns successfully" Jan 17 00:28:37.835590 containerd[1969]: time="2026-01-17T00:28:37.835544830Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:37.837774 containerd[1969]: time="2026-01-17T00:28:37.837692580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:28:37.837944 containerd[1969]: time="2026-01-17T00:28:37.837824600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:37.838052 kubelet[3181]: E0117 00:28:37.837997 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:37.838112 kubelet[3181]: E0117 00:28:37.838058 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:37.838657 kubelet[3181]: E0117 00:28:37.838222 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlgp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-7hcx6_calico-apiserver(cc407ca1-a787-4c80-b23e-a6c88347fad4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:37.839479 kubelet[3181]: E0117 00:28:37.839406 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:28:39.268347 containerd[1969]: time="2026-01-17T00:28:39.268133098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:28:39.572303 containerd[1969]: time="2026-01-17T00:28:39.572165408Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:39.574396 containerd[1969]: time="2026-01-17T00:28:39.574338377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 
00:28:39.574396 containerd[1969]: time="2026-01-17T00:28:39.574354217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:28:39.575160 kubelet[3181]: E0117 00:28:39.574788 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:39.575160 kubelet[3181]: E0117 00:28:39.574838 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:28:39.575530 kubelet[3181]: E0117 00:28:39.575210 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6f9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-k9mxk_calico-apiserver(0b97d5ab-19c5-4717-a6ca-1a7a01547f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:39.576114 
containerd[1969]: time="2026-01-17T00:28:39.576052112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:28:39.577351 kubelet[3181]: E0117 00:28:39.577308 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:28:39.872386 containerd[1969]: time="2026-01-17T00:28:39.872260676Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:39.874503 containerd[1969]: time="2026-01-17T00:28:39.874445279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:28:39.874602 containerd[1969]: time="2026-01-17T00:28:39.874530139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:28:39.874724 kubelet[3181]: E0117 00:28:39.874689 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:28:39.874792 kubelet[3181]: E0117 00:28:39.874735 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:28:39.875700 kubelet[3181]: E0117 00:28:39.875631 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:39.877897 containerd[1969]: time="2026-01-17T00:28:39.877678540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:28:40.141398 containerd[1969]: time="2026-01-17T00:28:40.141237332Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:40.143458 containerd[1969]: time="2026-01-17T00:28:40.143333804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:28:40.143458 containerd[1969]: time="2026-01-17T00:28:40.143409261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:28:40.143620 kubelet[3181]: E0117 00:28:40.143545 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:28:40.143620 kubelet[3181]: E0117 00:28:40.143588 3181 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:28:40.143757 kubelet[3181]: E0117 00:28:40.143703 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:40.145314 kubelet[3181]: E0117 00:28:40.145252 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:28:40.317070 systemd[1]: Started sshd@9-172.31.25.116:22-4.153.228.146:47384.service - OpenSSH per-connection server daemon (4.153.228.146:47384). Jan 17 00:28:40.898530 sshd[6059]: Accepted publickey for core from 4.153.228.146 port 47384 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:28:40.901731 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:40.907564 systemd-logind[1955]: New session 10 of user core. Jan 17 00:28:40.913008 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:28:41.397145 sshd[6059]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:41.400517 systemd[1]: sshd@9-172.31.25.116:22-4.153.228.146:47384.service: Deactivated successfully. Jan 17 00:28:41.402667 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:28:41.404278 systemd-logind[1955]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:28:41.406131 systemd-logind[1955]: Removed session 10. Jan 17 00:28:41.482167 systemd[1]: Started sshd@10-172.31.25.116:22-4.153.228.146:47398.service - OpenSSH per-connection server daemon (4.153.228.146:47398). Jan 17 00:28:41.965187 sshd[6073]: Accepted publickey for core from 4.153.228.146 port 47398 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:28:41.966808 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:41.971999 systemd-logind[1955]: New session 11 of user core. Jan 17 00:28:41.976927 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:28:42.444794 sshd[6073]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:42.449642 systemd-logind[1955]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:28:42.450474 systemd[1]: sshd@10-172.31.25.116:22-4.153.228.146:47398.service: Deactivated successfully. Jan 17 00:28:42.452704 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:28:42.453779 systemd-logind[1955]: Removed session 11. Jan 17 00:28:42.531964 systemd[1]: Started sshd@11-172.31.25.116:22-4.153.228.146:47404.service - OpenSSH per-connection server daemon (4.153.228.146:47404). Jan 17 00:28:43.033456 sshd[6084]: Accepted publickey for core from 4.153.228.146 port 47404 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:28:43.035146 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:43.040217 systemd-logind[1955]: New session 12 of user core. Jan 17 00:28:43.042957 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:28:43.470366 sshd[6084]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:43.474413 systemd[1]: sshd@11-172.31.25.116:22-4.153.228.146:47404.service: Deactivated successfully. Jan 17 00:28:43.476405 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:28:43.477327 systemd-logind[1955]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:28:43.479375 systemd-logind[1955]: Removed session 12. 
Jan 17 00:28:45.269161 kubelet[3181]: E0117 00:28:45.269103 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:28:47.268785 kubelet[3181]: E0117 00:28:47.268459 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:28:48.268475 kubelet[3181]: E0117 00:28:48.268077 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:28:48.564151 systemd[1]: Started sshd@12-172.31.25.116:22-4.153.228.146:51480.service - OpenSSH per-connection server daemon (4.153.228.146:51480). Jan 17 00:28:49.040903 sshd[6105]: Accepted publickey for core from 4.153.228.146 port 51480 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:28:49.042349 sshd[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:49.047228 systemd-logind[1955]: New session 13 of user core. Jan 17 00:28:49.052438 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:28:49.478901 sshd[6105]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:49.484490 systemd[1]: sshd@12-172.31.25.116:22-4.153.228.146:51480.service: Deactivated successfully. Jan 17 00:28:49.486697 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:28:49.488465 systemd-logind[1955]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:28:49.489707 systemd-logind[1955]: Removed session 13. 
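By this point the failures have shifted from ErrImagePull to ImagePullBackOff: kubelet is no longer hitting the registry on every pod sync but waiting out an exponential back-off between attempts, which is why the records above arrive seconds apart instead of back to back. A small sketch of that schedule; the 10s base and 300s cap are kubelet's documented defaults, not values read from this node's configuration:

```go
// Illustrative back-off schedule behind ImagePullBackOff: each failed pull
// roughly doubles the wait, capped at a maximum.
package main

import (
	"fmt"
	"time"
)

func backoffDelays(base, limit time.Duration, attempts int) []time.Duration {
	delays := make([]time.Duration, 0, attempts)
	d := base
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return delays
}

func main() {
	// Prints [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]: retries settle at the cap.
	fmt.Println(backoffDelays(10*time.Second, 300*time.Second, 7))
}
```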
Jan 17 00:28:52.270893 kubelet[3181]: E0117 00:28:52.269459 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:28:53.268268 kubelet[3181]: E0117 00:28:53.268183 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:28:54.269566 kubelet[3181]: E0117 00:28:54.269500 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:28:54.571252 systemd[1]: Started sshd@13-172.31.25.116:22-4.153.228.146:49632.service - OpenSSH per-connection server daemon (4.153.228.146:49632). Jan 17 00:28:55.081839 sshd[6118]: Accepted publickey for core from 4.153.228.146 port 49632 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:28:55.083894 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:28:55.089726 systemd-logind[1955]: New session 14 of user core. Jan 17 00:28:55.095131 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:28:55.529827 sshd[6118]: pam_unix(sshd:session): session closed for user core Jan 17 00:28:55.533075 systemd-logind[1955]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:28:55.533480 systemd[1]: sshd@13-172.31.25.116:22-4.153.228.146:49632.service: Deactivated successfully. Jan 17 00:28:55.536591 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:28:55.539637 systemd-logind[1955]: Removed session 14. 
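The "rpc error: code = NotFound desc = ..." prefix on every failure is a gRPC status crossing the CRI boundary between containerd and kubelet; the consumer classifies the error by its code rather than by matching strings. A minimal sketch using the real google.golang.org/grpc status and codes packages follows; the construction site and framing are illustrative, with only the message text copied from the log:

```go
// How a CRI NotFound travels: the server wraps it as a gRPC status, the
// client branches on the status code.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// What the containerd CRI server effectively returns for a missing tag:
	err := status.Error(codes.NotFound,
		`failed to pull and unpack image "ghcr.io/flatcar/calico/apiserver:v3.30.4"`)

	// What the kubelet side does: classify by code, not by string matching.
	if status.Code(err) == codes.NotFound {
		fmt.Println("image reference does not exist; back off and retry")
	}
}
```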
Jan 17 00:28:59.269738 containerd[1969]: time="2026-01-17T00:28:59.269658562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:28:59.560599 containerd[1969]: time="2026-01-17T00:28:59.560270183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:59.563660 containerd[1969]: time="2026-01-17T00:28:59.563581192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:28:59.563810 containerd[1969]: time="2026-01-17T00:28:59.563668085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:28:59.563864 kubelet[3181]: E0117 00:28:59.563824 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:59.564257 kubelet[3181]: E0117 00:28:59.563872 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:28:59.564257 kubelet[3181]: E0117 00:28:59.563989 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:04b6a3f3f16b4f078d52b8b865750bfc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:59.567264 containerd[1969]: time="2026-01-17T00:28:59.567224364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:28:59.840545 containerd[1969]: time="2026-01-17T00:28:59.840410839Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:28:59.936115 containerd[1969]: time="2026-01-17T00:28:59.936038157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:28:59.936375 containerd[1969]: time="2026-01-17T00:28:59.936086650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:28:59.936481 kubelet[3181]: E0117 00:28:59.936429 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:59.936546 kubelet[3181]: E0117 00:28:59.936492 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:28:59.936682 kubelet[3181]: E0117 00:28:59.936638 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:28:59.938334 kubelet[3181]: E0117 00:28:59.938284 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:29:00.271933 containerd[1969]: time="2026-01-17T00:29:00.270337598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:29:00.529840 containerd[1969]: time="2026-01-17T00:29:00.529687121Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:00.531895 containerd[1969]: time="2026-01-17T00:29:00.531795062Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:29:00.531895 containerd[1969]: time="2026-01-17T00:29:00.531836785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:00.532130 kubelet[3181]: E0117 00:29:00.532001 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:00.532130 kubelet[3181]: E0117 00:29:00.532045 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:00.532213 kubelet[3181]: E0117 00:29:00.532178 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc8vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6bww6_calico-system(394e468b-e5d2-4096-94d5-a6a60d966235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:00.533657 kubelet[3181]: E0117 00:29:00.533611 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:29:00.629891 systemd[1]: Started sshd@14-172.31.25.116:22-4.153.228.146:49644.service - OpenSSH per-connection server daemon (4.153.228.146:49644). Jan 17 00:29:01.221327 sshd[6161]: Accepted publickey for core from 4.153.228.146 port 49644 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:01.231587 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:01.245054 systemd-logind[1955]: New session 15 of user core. Jan 17 00:29:01.253065 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:29:02.417038 sshd[6161]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:02.513131 systemd-logind[1955]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:29:02.513655 systemd[1]: sshd@14-172.31.25.116:22-4.153.228.146:49644.service: Deactivated successfully. Jan 17 00:29:02.529004 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:29:02.541525 systemd-logind[1955]: Removed session 15. 
Jan 17 00:29:03.272498 containerd[1969]: time="2026-01-17T00:29:03.272441464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:29:03.538082 containerd[1969]: time="2026-01-17T00:29:03.537930512Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:03.540409 containerd[1969]: time="2026-01-17T00:29:03.540207927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:29:03.540409 containerd[1969]: time="2026-01-17T00:29:03.540354644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:29:03.541521 kubelet[3181]: E0117 00:29:03.540836 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:03.541521 kubelet[3181]: E0117 00:29:03.540882 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:03.541521 kubelet[3181]: E0117 00:29:03.541015 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnjs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-877bf5958-fmwqm_calico-system(5fd74b61-87d1-45e4-b949-57645e5eb510): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:03.542719 kubelet[3181]: E0117 00:29:03.542675 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:29:06.270243 containerd[1969]: time="2026-01-17T00:29:06.270036157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:06.587975 containerd[1969]: time="2026-01-17T00:29:06.587846236Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:06.590002 containerd[1969]: time="2026-01-17T00:29:06.589923382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:06.590215 containerd[1969]: time="2026-01-17T00:29:06.590014847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:06.590258 kubelet[3181]: E0117 00:29:06.590171 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:06.590258 kubelet[3181]: E0117 00:29:06.590216 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:06.590672 kubelet[3181]: E0117 00:29:06.590348 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6f9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-k9mxk_calico-apiserver(0b97d5ab-19c5-4717-a6ca-1a7a01547f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:06.591848 kubelet[3181]: E0117 00:29:06.591806 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:29:07.492092 systemd[1]: Started sshd@15-172.31.25.116:22-4.153.228.146:44832.service - OpenSSH per-connection server daemon (4.153.228.146:44832). 
Jan 17 00:29:08.040567 sshd[6176]: Accepted publickey for core from 4.153.228.146 port 44832 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:08.042933 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:08.048807 systemd-logind[1955]: New session 16 of user core. Jan 17 00:29:08.049944 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:29:08.271820 containerd[1969]: time="2026-01-17T00:29:08.270409111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:29:08.562580 containerd[1969]: time="2026-01-17T00:29:08.562419815Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:08.564526 containerd[1969]: time="2026-01-17T00:29:08.564455612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:29:08.564685 containerd[1969]: time="2026-01-17T00:29:08.564549006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:29:08.564778 kubelet[3181]: E0117 00:29:08.564715 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:08.565105 kubelet[3181]: E0117 00:29:08.564793 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:08.565105 kubelet[3181]: E0117 00:29:08.565002 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:08.566327 containerd[1969]: time="2026-01-17T00:29:08.566284913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:08.617918 sshd[6176]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:08.621047 systemd[1]: sshd@15-172.31.25.116:22-4.153.228.146:44832.service: Deactivated successfully. Jan 17 00:29:08.623067 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:29:08.625426 systemd-logind[1955]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:29:08.630382 systemd-logind[1955]: Removed session 16. Jan 17 00:29:08.717885 systemd[1]: Started sshd@16-172.31.25.116:22-4.153.228.146:44846.service - OpenSSH per-connection server daemon (4.153.228.146:44846). 
Jan 17 00:29:08.833398 containerd[1969]: time="2026-01-17T00:29:08.833266808Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:08.835360 containerd[1969]: time="2026-01-17T00:29:08.835308783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:08.835485 containerd[1969]: time="2026-01-17T00:29:08.835325641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:08.835725 kubelet[3181]: E0117 00:29:08.835678 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:08.835725 kubelet[3181]: E0117 00:29:08.835723 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:08.836222 containerd[1969]: time="2026-01-17T00:29:08.836071517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:29:08.836275 kubelet[3181]: E0117 00:29:08.836158 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlgp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-7hcx6_calico-apiserver(cc407ca1-a787-4c80-b23e-a6c88347fad4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:08.837564 kubelet[3181]: E0117 00:29:08.837527 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:29:09.117631 containerd[1969]: time="2026-01-17T00:29:09.117479080Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:09.119640 containerd[1969]: time="2026-01-17T00:29:09.119576338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:29:09.119815 containerd[1969]: time="2026-01-17T00:29:09.119671326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:29:09.119927 kubelet[3181]: E0117 00:29:09.119871 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:09.119927 kubelet[3181]: E0117 00:29:09.119922 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:09.120093 kubelet[3181]: E0117 
00:29:09.120040 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:09.121614 kubelet[3181]: E0117 00:29:09.121534 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:29:09.263252 sshd[6189]: Accepted publickey for core from 4.153.228.146 port 44846 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:09.264849 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:09.271280 systemd-logind[1955]: New session 17 of 
user core. Jan 17 00:29:09.276216 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:29:10.123986 sshd[6189]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:10.133724 systemd[1]: sshd@16-172.31.25.116:22-4.153.228.146:44846.service: Deactivated successfully. Jan 17 00:29:10.136466 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:29:10.137312 systemd-logind[1955]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:29:10.138429 systemd-logind[1955]: Removed session 17. Jan 17 00:29:10.202090 systemd[1]: Started sshd@17-172.31.25.116:22-4.153.228.146:44862.service - OpenSSH per-connection server daemon (4.153.228.146:44862). Jan 17 00:29:10.712408 sshd[6200]: Accepted publickey for core from 4.153.228.146 port 44862 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:10.714469 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:10.721331 systemd-logind[1955]: New session 18 of user core. Jan 17 00:29:10.724958 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:29:11.807588 sshd[6200]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:11.821452 systemd-logind[1955]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:29:11.822332 systemd[1]: sshd@17-172.31.25.116:22-4.153.228.146:44862.service: Deactivated successfully. Jan 17 00:29:11.825500 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:29:11.828188 systemd-logind[1955]: Removed session 18. Jan 17 00:29:11.907343 systemd[1]: Started sshd@18-172.31.25.116:22-4.153.228.146:44876.service - OpenSSH per-connection server daemon (4.153.228.146:44876). Jan 17 00:29:12.457550 sshd[6217]: Accepted publickey for core from 4.153.228.146 port 44876 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:12.459097 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:12.464386 systemd-logind[1955]: New session 19 of user core. Jan 17 00:29:12.468946 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:29:13.268559 kubelet[3181]: E0117 00:29:13.268204 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:29:13.311423 sshd[6217]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:13.315654 systemd[1]: sshd@18-172.31.25.116:22-4.153.228.146:44876.service: Deactivated successfully. Jan 17 00:29:13.318892 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:29:13.320426 systemd-logind[1955]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:29:13.322689 systemd-logind[1955]: Removed session 19. Jan 17 00:29:13.407403 systemd[1]: Started sshd@19-172.31.25.116:22-4.153.228.146:44890.service - OpenSSH per-connection server daemon (4.153.228.146:44890). 
Jan 17 00:29:13.960997 sshd[6235]: Accepted publickey for core from 4.153.228.146 port 44890 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:13.964127 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:13.970792 systemd-logind[1955]: New session 20 of user core. Jan 17 00:29:13.975046 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:29:14.421647 sshd[6235]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:14.427316 systemd[1]: sshd@19-172.31.25.116:22-4.153.228.146:44890.service: Deactivated successfully. Jan 17 00:29:14.431029 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:29:14.432255 systemd-logind[1955]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:29:14.433585 systemd-logind[1955]: Removed session 20. Jan 17 00:29:15.269161 kubelet[3181]: E0117 00:29:15.269095 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:29:17.277554 kubelet[3181]: E0117 00:29:17.268553 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:29:19.509328 systemd[1]: Started sshd@20-172.31.25.116:22-4.153.228.146:54418.service - OpenSSH per-connection server daemon (4.153.228.146:54418). Jan 17 00:29:20.001457 sshd[6249]: Accepted publickey for core from 4.153.228.146 port 54418 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:20.003446 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:20.009602 systemd-logind[1955]: New session 21 of user core. Jan 17 00:29:20.021916 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:29:20.525026 sshd[6249]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:20.531373 systemd[1]: sshd@20-172.31.25.116:22-4.153.228.146:54418.service: Deactivated successfully. Jan 17 00:29:20.538469 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:29:20.541812 systemd-logind[1955]: Session 21 logged out. 
Waiting for processes to exit. Jan 17 00:29:20.543480 systemd-logind[1955]: Removed session 21. Jan 17 00:29:21.270153 kubelet[3181]: E0117 00:29:21.270101 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:29:23.268966 kubelet[3181]: E0117 00:29:23.268618 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:29:24.273973 kubelet[3181]: E0117 00:29:24.273877 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:29:25.624959 systemd[1]: Started sshd@21-172.31.25.116:22-4.153.228.146:39352.service - OpenSSH per-connection server daemon (4.153.228.146:39352). Jan 17 00:29:26.190270 sshd[6263]: Accepted publickey for core from 4.153.228.146 port 39352 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:26.195613 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:26.204362 systemd-logind[1955]: New session 22 of user core. Jan 17 00:29:26.208997 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 17 00:29:26.278714 kubelet[3181]: E0117 00:29:26.278651 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:29:26.784067 sshd[6263]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:26.789547 systemd[1]: sshd@21-172.31.25.116:22-4.153.228.146:39352.service: Deactivated successfully. Jan 17 00:29:26.789905 systemd-logind[1955]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:29:26.793993 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:29:26.798103 systemd-logind[1955]: Removed session 22. Jan 17 00:29:28.275435 kubelet[3181]: E0117 00:29:28.275048 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:29:31.873117 systemd[1]: Started sshd@22-172.31.25.116:22-4.153.228.146:39368.service - OpenSSH per-connection server daemon (4.153.228.146:39368). Jan 17 00:29:32.295100 kubelet[3181]: E0117 00:29:32.295053 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:29:32.412155 sshd[6298]: Accepted publickey for core from 4.153.228.146 port 39368 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:32.416167 sshd[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:32.422661 systemd-logind[1955]: New session 23 of user core. Jan 17 00:29:32.429196 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 17 00:29:33.083379 sshd[6298]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:33.088231 systemd[1]: sshd@22-172.31.25.116:22-4.153.228.146:39368.service: Deactivated successfully. Jan 17 00:29:33.093361 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:29:33.098338 systemd-logind[1955]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:29:33.100294 systemd-logind[1955]: Removed session 23. Jan 17 00:29:36.390806 kubelet[3181]: E0117 00:29:36.389742 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:29:36.445788 kubelet[3181]: E0117 00:29:36.445283 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:29:38.166153 systemd[1]: Started sshd@23-172.31.25.116:22-4.153.228.146:50968.service - OpenSSH per-connection server daemon (4.153.228.146:50968). Jan 17 00:29:38.272701 kubelet[3181]: E0117 00:29:38.272400 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:29:38.671463 sshd[6313]: Accepted publickey for core from 4.153.228.146 port 50968 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:38.674449 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:38.680402 systemd-logind[1955]: New session 24 of user core. Jan 17 00:29:38.685984 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 00:29:39.170390 sshd[6313]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:39.181217 systemd[1]: sshd@23-172.31.25.116:22-4.153.228.146:50968.service: Deactivated successfully. Jan 17 00:29:39.185910 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:29:39.187151 systemd-logind[1955]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:29:39.189055 systemd-logind[1955]: Removed session 24. Jan 17 00:29:39.269338 kubelet[3181]: E0117 00:29:39.269283 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:29:41.269710 containerd[1969]: time="2026-01-17T00:29:41.269442669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:29:41.543677 containerd[1969]: time="2026-01-17T00:29:41.543539906Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:41.545968 containerd[1969]: time="2026-01-17T00:29:41.545798820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:29:41.545968 containerd[1969]: time="2026-01-17T00:29:41.545855759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:41.546153 kubelet[3181]: E0117 00:29:41.546035 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:41.546153 kubelet[3181]: E0117 00:29:41.546079 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:41.546466 kubelet[3181]: E0117 00:29:41.546230 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc8vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-6bww6_calico-system(394e468b-e5d2-4096-94d5-a6a60d966235): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:41.547720 kubelet[3181]: E0117 00:29:41.547679 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:29:44.263980 systemd[1]: Started 
sshd@24-172.31.25.116:22-4.153.228.146:50976.service - OpenSSH per-connection server daemon (4.153.228.146:50976). Jan 17 00:29:44.756994 sshd[6336]: Accepted publickey for core from 4.153.228.146 port 50976 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:29:44.759725 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:44.767093 systemd-logind[1955]: New session 25 of user core. Jan 17 00:29:44.776597 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:29:45.187960 sshd[6336]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:45.192335 systemd[1]: sshd@24-172.31.25.116:22-4.153.228.146:50976.service: Deactivated successfully. Jan 17 00:29:45.195080 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:29:45.196380 systemd-logind[1955]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:29:45.197501 systemd-logind[1955]: Removed session 25. Jan 17 00:29:46.273119 containerd[1969]: time="2026-01-17T00:29:46.273019190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:29:46.580010 containerd[1969]: time="2026-01-17T00:29:46.579881521Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:46.582147 containerd[1969]: time="2026-01-17T00:29:46.582087905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:29:46.582294 containerd[1969]: time="2026-01-17T00:29:46.582187718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:29:46.582417 kubelet[3181]: E0117 00:29:46.582368 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:46.583378 kubelet[3181]: E0117 00:29:46.582429 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:46.583378 kubelet[3181]: E0117 00:29:46.582622 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cnjs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-877bf5958-fmwqm_calico-system(5fd74b61-87d1-45e4-b949-57645e5eb510): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:46.584335 kubelet[3181]: E0117 00:29:46.584281 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:29:50.270843 containerd[1969]: time="2026-01-17T00:29:50.269904810Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:29:50.550566 containerd[1969]: time="2026-01-17T00:29:50.550429004Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:50.552862 containerd[1969]: time="2026-01-17T00:29:50.552812047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:29:50.552999 containerd[1969]: time="2026-01-17T00:29:50.552824901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:29:50.553094 kubelet[3181]: E0117 00:29:50.553057 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:29:50.553396 kubelet[3181]: E0117 00:29:50.553110 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:29:50.553396 kubelet[3181]: E0117 00:29:50.553283 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:04b6a3f3f16b4f078d52b8b865750bfc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 
00:29:50.553569 containerd[1969]: time="2026-01-17T00:29:50.553495682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:50.845455 containerd[1969]: time="2026-01-17T00:29:50.845261341Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:50.847458 containerd[1969]: time="2026-01-17T00:29:50.847407857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:50.847585 containerd[1969]: time="2026-01-17T00:29:50.847494804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:50.847841 kubelet[3181]: E0117 00:29:50.847692 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:50.847841 kubelet[3181]: E0117 00:29:50.847767 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:50.848895 kubelet[3181]: E0117 00:29:50.848034 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6f9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-k9mxk_calico-apiserver(0b97d5ab-19c5-4717-a6ca-1a7a01547f6c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:50.849033 containerd[1969]: time="2026-01-17T00:29:50.848047997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:50.849611 kubelet[3181]: E0117 00:29:50.849549 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:29:51.112230 containerd[1969]: time="2026-01-17T00:29:51.112080774Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:51.114363 containerd[1969]: time="2026-01-17T00:29:51.114309788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:51.115450 containerd[1969]: time="2026-01-17T00:29:51.114344341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:51.116020 kubelet[3181]: E0117 00:29:51.114559 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:51.118137 kubelet[3181]: E0117 00:29:51.114605 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:51.118137 kubelet[3181]: E0117 
00:29:51.117710 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xlgp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6747446b5-7hcx6_calico-apiserver(cc407ca1-a787-4c80-b23e-a6c88347fad4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:51.118461 containerd[1969]: time="2026-01-17T00:29:51.117231410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:29:51.119354 kubelet[3181]: E0117 00:29:51.119280 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:29:51.405004 containerd[1969]: time="2026-01-17T00:29:51.404892074Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:51.408797 containerd[1969]: time="2026-01-17T00:29:51.407408643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:29:51.408797 containerd[1969]: time="2026-01-17T00:29:51.407464761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:29:51.409061 kubelet[3181]: E0117 00:29:51.407635 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:51.409061 kubelet[3181]: E0117 00:29:51.407690 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:51.409061 kubelet[3181]: E0117 00:29:51.407957 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:51.409792 containerd[1969]: time="2026-01-17T00:29:51.409660796Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:29:51.659686 containerd[1969]: time="2026-01-17T00:29:51.659526298Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:51.661899 containerd[1969]: time="2026-01-17T00:29:51.661825234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:29:51.662204 containerd[1969]: time="2026-01-17T00:29:51.661863834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:29:51.662261 kubelet[3181]: E0117 00:29:51.662109 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:29:51.662261 kubelet[3181]: E0117 00:29:51.662165 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:29:51.662845 kubelet[3181]: E0117 00:29:51.662444 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6656dcccd5-pnsfx_calico-system(76314834-804d-441c-ad9c-ab52475d9d5c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:51.663006 containerd[1969]: time="2026-01-17T00:29:51.662721975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:29:51.664541 kubelet[3181]: E0117 00:29:51.664369 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:29:51.953044 containerd[1969]: time="2026-01-17T00:29:51.952777054Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:51.955096 containerd[1969]: time="2026-01-17T00:29:51.955041693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:29:51.955096 containerd[1969]: time="2026-01-17T00:29:51.955126474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:29:51.955341 kubelet[3181]: E0117 00:29:51.955251 3181 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:51.955341 kubelet[3181]: E0117 00:29:51.955295 3181 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:51.955462 kubelet[3181]: E0117 00:29:51.955409 3181 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6sg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5p9mr_calico-system(c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:51.956767 kubelet[3181]: E0117 00:29:51.956605 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:29:54.269521 kubelet[3181]: E0117 00:29:54.269212 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:29:59.358188 kubelet[3181]: E0117 00:29:59.351234 3181 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-116?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 00:29:59.541254 systemd[1]: cri-containerd-4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635.scope: Deactivated successfully. Jan 17 00:29:59.541506 systemd[1]: cri-containerd-4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635.scope: Consumed 13.184s CPU time. Jan 17 00:29:59.716580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635-rootfs.mount: Deactivated successfully. 
Jan 17 00:29:59.753255 containerd[1969]: time="2026-01-17T00:29:59.742375185Z" level=info msg="shim disconnected" id=4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635 namespace=k8s.io Jan 17 00:29:59.756956 containerd[1969]: time="2026-01-17T00:29:59.756808211Z" level=warning msg="cleaning up after shim disconnected" id=4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635 namespace=k8s.io Jan 17 00:29:59.756956 containerd[1969]: time="2026-01-17T00:29:59.756850974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:00.173902 kubelet[3181]: I0117 00:30:00.173594 3181 scope.go:117] "RemoveContainer" containerID="4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635" Jan 17 00:30:00.271634 kubelet[3181]: E0117 00:30:00.271575 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:30:00.328796 containerd[1969]: time="2026-01-17T00:30:00.328657320Z" level=info msg="CreateContainer within sandbox \"f492c753e99e0584be23d6124a15d5c874e795f82ce2e9a82cb0cebbc07744fb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 00:30:00.446469 containerd[1969]: time="2026-01-17T00:30:00.446178347Z" level=info msg="CreateContainer within sandbox \"f492c753e99e0584be23d6124a15d5c874e795f82ce2e9a82cb0cebbc07744fb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd\"" Jan 17 00:30:00.453242 containerd[1969]: time="2026-01-17T00:30:00.453201695Z" level=info msg="StartContainer for \"d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd\"" Jan 17 00:30:00.493034 systemd[1]: Started cri-containerd-d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd.scope - libcontainer container d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd. Jan 17 00:30:00.545373 containerd[1969]: time="2026-01-17T00:30:00.545190539Z" level=info msg="StartContainer for \"d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd\" returns successfully" Jan 17 00:30:00.747099 systemd[1]: cri-containerd-798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31.scope: Deactivated successfully. Jan 17 00:30:00.747881 systemd[1]: cri-containerd-798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31.scope: Consumed 4.245s CPU time, 48.6M memory peak, 0B memory swap peak. Jan 17 00:30:00.782198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31-rootfs.mount: Deactivated successfully. 
Jan 17 00:30:00.799685 containerd[1969]: time="2026-01-17T00:30:00.799622211Z" level=info msg="shim disconnected" id=798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31 namespace=k8s.io Jan 17 00:30:00.800163 containerd[1969]: time="2026-01-17T00:30:00.799695809Z" level=warning msg="cleaning up after shim disconnected" id=798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31 namespace=k8s.io Jan 17 00:30:00.800163 containerd[1969]: time="2026-01-17T00:30:00.799705779Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:01.107911 kubelet[3181]: I0117 00:30:01.107699 3181 scope.go:117] "RemoveContainer" containerID="798bde592c5e02a935432d62396849e65a4dc0c1d86dd49810e07d1fabfcbf31" Jan 17 00:30:01.112002 containerd[1969]: time="2026-01-17T00:30:01.111959145Z" level=info msg="CreateContainer within sandbox \"1f808aa0bea3826d91dc28f4565c816804e9dac54ada39dede12bf7306ae62e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 00:30:01.172694 containerd[1969]: time="2026-01-17T00:30:01.172644653Z" level=info msg="CreateContainer within sandbox \"1f808aa0bea3826d91dc28f4565c816804e9dac54ada39dede12bf7306ae62e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e3e93b8e87905aa4a42d723b4c353d1e130b3d0f3c36da90d24fbfccbe63617d\"" Jan 17 00:30:01.177526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911489323.mount: Deactivated successfully. Jan 17 00:30:01.181363 containerd[1969]: time="2026-01-17T00:30:01.178210715Z" level=info msg="StartContainer for \"e3e93b8e87905aa4a42d723b4c353d1e130b3d0f3c36da90d24fbfccbe63617d\"" Jan 17 00:30:01.303152 systemd[1]: Started cri-containerd-e3e93b8e87905aa4a42d723b4c353d1e130b3d0f3c36da90d24fbfccbe63617d.scope - libcontainer container e3e93b8e87905aa4a42d723b4c353d1e130b3d0f3c36da90d24fbfccbe63617d. 
Jan 17 00:30:01.679441 containerd[1969]: time="2026-01-17T00:30:01.679299765Z" level=info msg="StartContainer for \"e3e93b8e87905aa4a42d723b4c353d1e130b3d0f3c36da90d24fbfccbe63617d\" returns successfully" Jan 17 00:30:02.371884 kubelet[3181]: E0117 00:30:02.358099 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:30:02.372928 kubelet[3181]: E0117 00:30:02.372842 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:30:04.624011 systemd[1]: cri-containerd-44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df.scope: Deactivated successfully. Jan 17 00:30:04.624345 systemd[1]: cri-containerd-44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df.scope: Consumed 3.197s CPU time, 24.7M memory peak, 0B memory swap peak. Jan 17 00:30:04.715586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df-rootfs.mount: Deactivated successfully. 
Jan 17 00:30:04.753301 containerd[1969]: time="2026-01-17T00:30:04.753227913Z" level=info msg="shim disconnected" id=44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df namespace=k8s.io Jan 17 00:30:04.753301 containerd[1969]: time="2026-01-17T00:30:04.753297675Z" level=warning msg="cleaning up after shim disconnected" id=44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df namespace=k8s.io Jan 17 00:30:04.753301 containerd[1969]: time="2026-01-17T00:30:04.753308442Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:05.175979 kubelet[3181]: I0117 00:30:05.175945 3181 scope.go:117] "RemoveContainer" containerID="44cad2d84e803d91cbe6309bd19d2c662bd43b28fb81b5368f5f85e783d833df" Jan 17 00:30:05.189588 containerd[1969]: time="2026-01-17T00:30:05.189534281Z" level=info msg="CreateContainer within sandbox \"10e17292ba6966b858c637701869e40d50bab4bc2b22d28f4084b892019c0efb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 00:30:05.242141 containerd[1969]: time="2026-01-17T00:30:05.242092081Z" level=info msg="CreateContainer within sandbox \"10e17292ba6966b858c637701869e40d50bab4bc2b22d28f4084b892019c0efb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"71d7807c3e9a339403bba3ea315cc2fcfb6661b49c55722148688b367e46fdc7\"" Jan 17 00:30:05.244271 containerd[1969]: time="2026-01-17T00:30:05.244228407Z" level=info msg="StartContainer for \"71d7807c3e9a339403bba3ea315cc2fcfb6661b49c55722148688b367e46fdc7\"" Jan 17 00:30:05.302216 systemd[1]: Started cri-containerd-71d7807c3e9a339403bba3ea315cc2fcfb6661b49c55722148688b367e46fdc7.scope - libcontainer container 71d7807c3e9a339403bba3ea315cc2fcfb6661b49c55722148688b367e46fdc7. Jan 17 00:30:05.381411 containerd[1969]: time="2026-01-17T00:30:05.381355009Z" level=info msg="StartContainer for \"71d7807c3e9a339403bba3ea315cc2fcfb6661b49c55722148688b367e46fdc7\" returns successfully" Jan 17 00:30:06.269993 kubelet[3181]: E0117 00:30:06.269937 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:30:06.272738 kubelet[3181]: E0117 00:30:06.272688 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:30:07.268618 kubelet[3181]: E0117 00:30:07.268559 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235" Jan 17 00:30:09.359216 kubelet[3181]: E0117 00:30:09.359085 3181 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-116?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 00:30:13.943536 systemd[1]: cri-containerd-d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd.scope: Deactivated successfully. Jan 17 00:30:13.969781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd-rootfs.mount: Deactivated successfully. Jan 17 00:30:13.978321 containerd[1969]: time="2026-01-17T00:30:13.978102002Z" level=info msg="shim disconnected" id=d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd namespace=k8s.io Jan 17 00:30:13.978321 containerd[1969]: time="2026-01-17T00:30:13.978156044Z" level=warning msg="cleaning up after shim disconnected" id=d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd namespace=k8s.io Jan 17 00:30:13.978321 containerd[1969]: time="2026-01-17T00:30:13.978165236Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:14.248363 kubelet[3181]: I0117 00:30:14.248051 3181 scope.go:117] "RemoveContainer" containerID="4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635" Jan 17 00:30:14.248863 kubelet[3181]: I0117 00:30:14.248393 3181 scope.go:117] "RemoveContainer" containerID="d416edd2808a599d7e5c090eb973e8e44f7ce300201cc1601e6463df7b0281dd" Jan 17 00:30:14.248863 kubelet[3181]: E0117 00:30:14.248620 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-8bcgz_tigera-operator(b189d04e-c012-4cb4-a30f-abd65ad43060)\"" pod="tigera-operator/tigera-operator-7dcd859c48-8bcgz" podUID="b189d04e-c012-4cb4-a30f-abd65ad43060" Jan 17 00:30:14.271746 kubelet[3181]: E0117 00:30:14.271385 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-7hcx6" podUID="cc407ca1-a787-4c80-b23e-a6c88347fad4" Jan 17 00:30:14.273180 containerd[1969]: time="2026-01-17T00:30:14.273138656Z" level=info msg="RemoveContainer for 
\"4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635\"" Jan 17 00:30:14.278958 containerd[1969]: time="2026-01-17T00:30:14.278905408Z" level=info msg="RemoveContainer for \"4a12d337d46550598af81ce281f2c351f7d10364b7471d170103133b1801a635\" returns successfully" Jan 17 00:30:15.268545 kubelet[3181]: E0117 00:30:15.268475 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-877bf5958-fmwqm" podUID="5fd74b61-87d1-45e4-b949-57645e5eb510" Jan 17 00:30:17.268767 kubelet[3181]: E0117 00:30:17.268645 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6747446b5-k9mxk" podUID="0b97d5ab-19c5-4717-a6ca-1a7a01547f6c" Jan 17 00:30:17.269206 kubelet[3181]: E0117 00:30:17.269128 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5p9mr" podUID="c5cbb1a7-a8a6-481d-bf9e-6f05e0da26d9" Jan 17 00:30:17.269798 kubelet[3181]: E0117 00:30:17.269715 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6656dcccd5-pnsfx" podUID="76314834-804d-441c-ad9c-ab52475d9d5c" Jan 17 00:30:19.373924 kubelet[3181]: E0117 00:30:19.373866 3181 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-116?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 17 00:30:21.267956 kubelet[3181]: E0117 00:30:21.267902 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-6bww6" podUID="394e468b-e5d2-4096-94d5-a6a60d966235"