Jan 24 00:37:25.944030 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:37:25.944068 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:37:25.944087 kernel: BIOS-provided physical RAM map:
Jan 24 00:37:25.944122 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:37:25.944133 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 24 00:37:25.944144 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 24 00:37:25.944158 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 24 00:37:25.944170 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 24 00:37:25.944183 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 24 00:37:25.944198 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 24 00:37:25.944211 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 24 00:37:25.944222 kernel: NX (Execute Disable) protection: active
Jan 24 00:37:25.944234 kernel: APIC: Static calls initialized
Jan 24 00:37:25.944246 kernel: efi: EFI v2.7 by EDK II
Jan 24 00:37:25.944260 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 24 00:37:25.944274 kernel: SMBIOS 2.7 present.
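Note: the three "usable" e820 regions above account for the total RAM reported later in this log ("Memory: .../2037804K"). A quick back-of-envelope check in shell arithmetic (the region ends are inclusive, hence the +1):

    echo $(( (0x9ffff + 1) + (0x786cdfff + 1 - 0x100000) + (0x7c97bfff + 1 - 0x789de000) ))
    # 2086715392 bytes = 2037808 KiB; the kernel then re-reserves the 4 KiB page
    # at address 0 ("update [mem 0x00000000-0x00000fff] usable ==> reserved" below),
    # which yields the 2037804K total it prints later.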
Jan 24 00:37:25.944287 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 24 00:37:25.944299 kernel: Hypervisor detected: KVM
Jan 24 00:37:25.944311 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:37:25.944323 kernel: kvm-clock: using sched offset of 4327737964 cycles
Jan 24 00:37:25.944336 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:37:25.944349 kernel: tsc: Detected 2499.994 MHz processor
Jan 24 00:37:25.944363 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:37:25.944375 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:37:25.944389 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 24 00:37:25.944404 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:37:25.944417 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:37:25.944429 kernel: Using GB pages for direct mapping
Jan 24 00:37:25.944442 kernel: Secure boot disabled
Jan 24 00:37:25.944454 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:37:25.944467 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 24 00:37:25.944480 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 24 00:37:25.944492 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 24 00:37:25.944504 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 24 00:37:25.944520 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 24 00:37:25.944532 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 24 00:37:25.944545 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 24 00:37:25.944557 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 24 00:37:25.944569 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 24 00:37:25.944581 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 24 00:37:25.944599 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 24 00:37:25.944614 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 24 00:37:25.944627 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 24 00:37:25.944640 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 24 00:37:25.944652 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 24 00:37:25.944665 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 24 00:37:25.944678 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 24 00:37:25.944691 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 24 00:37:25.944707 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 24 00:37:25.944729 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 24 00:37:25.944742 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 24 00:37:25.944755 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 24 00:37:25.944769 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 24 00:37:25.944782 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
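Note: the firmware tables enumerated above can also be inspected from userspace on the booted system; a minimal sketch (the acpidump/iasl pair assumes the acpica-tools package is installed):

    ls /sys/firmware/acpi/tables/       # FACP DSDT APIC SRAT HPET BGRT SSDT1 SSDT2 ...
    acpidump -b && iasl -d dsdt.dat     # dump all tables to files, then disassemble the DSDT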
Jan 24 00:37:25.944796 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 00:37:25.944809 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 00:37:25.944822 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 24 00:37:25.944838 kernel: NUMA: Initialized distance table, cnt=1
Jan 24 00:37:25.944850 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 24 00:37:25.944864 kernel: Zone ranges:
Jan 24 00:37:25.944876 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:37:25.944888 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 24 00:37:25.944901 kernel: Normal empty
Jan 24 00:37:25.944914 kernel: Movable zone start for each node
Jan 24 00:37:25.944926 kernel: Early memory node ranges
Jan 24 00:37:25.944938 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:37:25.944954 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 24 00:37:25.944966 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 24 00:37:25.944979 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 24 00:37:25.944991 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:37:25.945004 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:37:25.945017 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 24 00:37:25.945029 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 24 00:37:25.945042 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 24 00:37:25.945055 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:37:25.945068 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 24 00:37:25.945084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:37:25.945108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:37:25.945121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:37:25.945134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:37:25.945147 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:37:25.945159 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:37:25.945172 kernel: TSC deadline timer available
Jan 24 00:37:25.945185 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:37:25.945197 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:37:25.945213 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 24 00:37:25.945226 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:37:25.945239 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:37:25.945252 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:37:25.945265 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:37:25.945278 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:37:25.945289 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:37:25.945302 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:37:25.945314 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:37:25.945332 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:37:25.945345 kernel: random: crng init done
Jan 24 00:37:25.945358 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:37:25.945372 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 00:37:25.945385 kernel: Fallback order for Node 0: 0
Jan 24 00:37:25.945398 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 24 00:37:25.945411 kernel: Policy zone: DMA32
Jan 24 00:37:25.945423 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:37:25.945439 kernel: Memory: 1874624K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162920K reserved, 0K cma-reserved)
Jan 24 00:37:25.945452 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:37:25.945464 kernel: Kernel/User page tables isolation: enabled
Jan 24 00:37:25.945476 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:37:25.945489 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:37:25.945501 kernel: Dynamic Preempt: voluntary
Jan 24 00:37:25.945513 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:37:25.945527 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:37:25.945539 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:37:25.945556 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:37:25.945569 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:37:25.945582 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:37:25.945595 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:37:25.945608 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:37:25.945621 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 24 00:37:25.945634 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:37:25.945660 kernel: Console: colour dummy device 80x25
Jan 24 00:37:25.945673 kernel: printk: console [tty0] enabled
Jan 24 00:37:25.945686 kernel: printk: console [ttyS0] enabled
Jan 24 00:37:25.945699 kernel: ACPI: Core revision 20230628
Jan 24 00:37:25.945713 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 24 00:37:25.945729 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:37:25.945742 kernel: x2apic enabled
Jan 24 00:37:25.945756 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:37:25.945770 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jan 24 00:37:25.945784 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Jan 24 00:37:25.945801 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 24 00:37:25.945815 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 24 00:37:25.945829 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:37:25.945843 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:37:25.945857 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:37:25.945871 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 24 00:37:25.945885 kernel: RETBleed: Vulnerable
Jan 24 00:37:25.945899 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:37:25.945914 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:37:25.945928 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:37:25.945945 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 24 00:37:25.945960 kernel: active return thunk: its_return_thunk
Jan 24 00:37:25.945974 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 00:37:25.945987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:37:25.946002 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:37:25.946016 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:37:25.946030 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 24 00:37:25.946045 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 24 00:37:25.946058 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 24 00:37:25.946073 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 24 00:37:25.946087 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 24 00:37:25.946194 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 24 00:37:25.946209 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:37:25.946223 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 24 00:37:25.946237 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 24 00:37:25.946251 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 24 00:37:25.946265 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 24 00:37:25.946279 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 24 00:37:25.946294 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 24 00:37:25.946308 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 24 00:37:25.946323 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:37:25.946337 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:37:25.946354 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:37:25.946368 kernel: landlock: Up and running.
Jan 24 00:37:25.946382 kernel: SELinux: Initializing.
Jan 24 00:37:25.946395 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 00:37:25.946408 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 00:37:25.946422 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 24 00:37:25.946451 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:37:25.946466 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:37:25.946482 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:37:25.946497 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 24 00:37:25.946517 kernel: signal: max sigframe size: 3632
Jan 24 00:37:25.946533 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:37:25.946549 kernel: rcu: Max phase no-delay instances is 400.
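Note: the vulnerability and mitigation states logged above are also exported at runtime, so they can be reviewed on the booted system without scrolling the boot log:

    grep -r . /sys/devices/system/cpu/vulnerabilities/
    # e.g. .../retbleed:Vulnerable
    #      .../mds:Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown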
Jan 24 00:37:25.946564 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:37:25.946580 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:37:25.946596 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:37:25.946611 kernel: .... node #0, CPUs: #1
Jan 24 00:37:25.946627 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 24 00:37:25.946644 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 24 00:37:25.946662 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:37:25.946678 kernel: smpboot: Max logical packages: 1
Jan 24 00:37:25.946694 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Jan 24 00:37:25.946709 kernel: devtmpfs: initialized
Jan 24 00:37:25.946725 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:37:25.946741 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 24 00:37:25.946756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:37:25.946772 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:37:25.946788 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:37:25.946807 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:37:25.946823 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:37:25.946839 kernel: audit: type=2000 audit(1769215046.237:1): state=initialized audit_enabled=0 res=1
Jan 24 00:37:25.946854 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:37:25.946869 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:37:25.946885 kernel: cpuidle: using governor menu
Jan 24 00:37:25.946901 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:37:25.946916 kernel: dca service started, version 1.12.1
Jan 24 00:37:25.946932 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:37:25.946950 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
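Note: the BogoMIPS figures above are arithmetic, not a measurement. Since the TSC frequency is already known (2499.994 MHz, so lpj=2499994 with HZ=1000), calibration is skipped and the per-CPU value is lpj * HZ / 500000:

    echo "scale=3; 2499994 * 1000 / 500000" | bc   # 4999.988, printed truncated as 4999.98
    # summed over both CPUs this gives the "Total of 2 processors activated
    # (9999.97 BogoMIPS)" line above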
Jan 24 00:37:25.946966 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:37:25.946982 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:37:25.946997 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:37:25.947013 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:37:25.947029 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:37:25.947044 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:37:25.947060 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:37:25.947075 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 24 00:37:25.947094 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:37:25.947144 kernel: ACPI: Interpreter enabled
Jan 24 00:37:25.947156 kernel: ACPI: PM: (supports S0 S5)
Jan 24 00:37:25.947168 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:37:25.947182 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:37:25.947196 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:37:25.947209 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 24 00:37:25.947225 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:37:25.947467 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:37:25.947635 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 24 00:37:25.947776 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 24 00:37:25.947794 kernel: acpiphp: Slot [3] registered
Jan 24 00:37:25.947808 kernel: acpiphp: Slot [4] registered
Jan 24 00:37:25.947823 kernel: acpiphp: Slot [5] registered
Jan 24 00:37:25.947838 kernel: acpiphp: Slot [6] registered
Jan 24 00:37:25.947853 kernel: acpiphp: Slot [7] registered
Jan 24 00:37:25.947872 kernel: acpiphp: Slot [8] registered
Jan 24 00:37:25.947887 kernel: acpiphp: Slot [9] registered
Jan 24 00:37:25.947905 kernel: acpiphp: Slot [10] registered
Jan 24 00:37:25.947920 kernel: acpiphp: Slot [11] registered
Jan 24 00:37:25.947934 kernel: acpiphp: Slot [12] registered
Jan 24 00:37:25.947950 kernel: acpiphp: Slot [13] registered
Jan 24 00:37:25.947964 kernel: acpiphp: Slot [14] registered
Jan 24 00:37:25.947977 kernel: acpiphp: Slot [15] registered
Jan 24 00:37:25.947991 kernel: acpiphp: Slot [16] registered
Jan 24 00:37:25.948007 kernel: acpiphp: Slot [17] registered
Jan 24 00:37:25.948024 kernel: acpiphp: Slot [18] registered
Jan 24 00:37:25.948037 kernel: acpiphp: Slot [19] registered
Jan 24 00:37:25.948051 kernel: acpiphp: Slot [20] registered
Jan 24 00:37:25.948065 kernel: acpiphp: Slot [21] registered
Jan 24 00:37:25.948079 kernel: acpiphp: Slot [22] registered
Jan 24 00:37:25.948093 kernel: acpiphp: Slot [23] registered
Jan 24 00:37:25.948142 kernel: acpiphp: Slot [24] registered
Jan 24 00:37:25.948156 kernel: acpiphp: Slot [25] registered
Jan 24 00:37:25.948171 kernel: acpiphp: Slot [26] registered
Jan 24 00:37:25.948190 kernel: acpiphp: Slot [27] registered
Jan 24 00:37:25.948205 kernel: acpiphp: Slot [28] registered
Jan 24 00:37:25.948220 kernel: acpiphp: Slot [29] registered
Jan 24 00:37:25.948236 kernel: acpiphp: Slot [30] registered
Jan 24 00:37:25.948251 kernel: acpiphp: Slot [31] registered
Jan 24 00:37:25.948266 kernel: PCI host bridge to bus 0000:00
Jan 24 00:37:25.948436 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:37:25.948560 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:37:25.948689 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:37:25.948818 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 24 00:37:25.948932 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 24 00:37:25.949044 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:37:25.949215 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 24 00:37:25.949362 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 24 00:37:25.949497 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 24 00:37:25.949629 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 24 00:37:25.949756 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 24 00:37:25.949881 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 24 00:37:25.950007 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 24 00:37:25.950162 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 24 00:37:25.950291 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 24 00:37:25.950421 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 24 00:37:25.950579 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 24 00:37:25.950736 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 24 00:37:25.950869 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 24 00:37:25.950999 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 24 00:37:25.951152 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:37:25.951294 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 24 00:37:25.951434 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 24 00:37:25.951576 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 24 00:37:25.951709 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 24 00:37:25.951730 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:37:25.951747 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:37:25.951763 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:37:25.951779 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:37:25.951795 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 24 00:37:25.951815 kernel: iommu: Default domain type: Translated
Jan 24 00:37:25.951831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:37:25.951847 kernel: efivars: Registered efivars operations
Jan 24 00:37:25.951863 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:37:25.951879 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:37:25.951896 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 24 00:37:25.951912 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 24 00:37:25.952042 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 24 00:37:25.953199 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 24 00:37:25.953355 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:37:25.953375 kernel: vgaarb: loaded
Jan 24 00:37:25.953390 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 24 00:37:25.953404 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
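Note: the vendor:device pairs probed above can be cross-checked from userspace; the output below is illustrative of what lspci typically reports for these Amazon (vendor 1d0f) functions:

    lspci -nn
    # 00:03.0 VGA compatible controller [0300]: Amazon.com, Inc. Device [1d0f:1111]
    # 00:04.0 Non-Volatile memory controller [0108]: Amazon.com, Inc. Device [1d0f:8061]  <- EBS NVMe
    # 00:05.0 Ethernet controller [0200]: Amazon.com, Inc. Elastic Network Adapter (ENA) [1d0f:ec20]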
Jan 24 00:37:25.953420 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:37:25.953435 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:37:25.953448 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:37:25.953462 kernel: pnp: PnP ACPI init
Jan 24 00:37:25.953480 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 00:37:25.953495 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:37:25.953509 kernel: NET: Registered PF_INET protocol family
Jan 24 00:37:25.953524 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:37:25.953539 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 24 00:37:25.953553 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:37:25.953567 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:37:25.953582 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 24 00:37:25.953596 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 24 00:37:25.953613 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 00:37:25.953628 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 00:37:25.953642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:37:25.953657 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:37:25.953779 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:37:25.953900 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:37:25.954015 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:37:25.954143 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 24 00:37:25.954259 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 24 00:37:25.954401 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 24 00:37:25.954420 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:37:25.954435 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 24 00:37:25.954450 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Jan 24 00:37:25.954464 kernel: clocksource: Switched to clocksource tsc
Jan 24 00:37:25.954479 kernel: Initialise system trusted keyrings
Jan 24 00:37:25.954493 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 24 00:37:25.954508 kernel: Key type asymmetric registered
Jan 24 00:37:25.954525 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:37:25.954539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:37:25.954553 kernel: io scheduler mq-deadline registered
Jan 24 00:37:25.954568 kernel: io scheduler kyber registered
Jan 24 00:37:25.954582 kernel: io scheduler bfq registered
Jan 24 00:37:25.954597 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:37:25.954611 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:37:25.954626 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:37:25.954641 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:37:25.954658 kernel: i8042: Warning: Keylock active
Jan 24 00:37:25.954672 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:37:25.954687 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
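Note: the boot switches clocksources twice above (to kvm-clock, then to the refined TSC once calibration completes). The current and available choices remain visible via sysfs on the running system:

    cat /sys/devices/system/clocksource/clocksource0/current_clocksource    # tsc
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource  # tsc kvm-clock acpi_pm ...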
Jan 24 00:37:25.954829 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 24 00:37:25.957257 kernel: rtc_cmos 00:00: registered as rtc0
Jan 24 00:37:25.957407 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:37:25 UTC (1769215045)
Jan 24 00:37:25.957533 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 24 00:37:25.957552 kernel: intel_pstate: CPU model not supported
Jan 24 00:37:25.957572 kernel: efifb: probing for efifb
Jan 24 00:37:25.957587 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 24 00:37:25.957601 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 24 00:37:25.957616 kernel: efifb: scrolling: redraw
Jan 24 00:37:25.957631 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 24 00:37:25.957645 kernel: Console: switching to colour frame buffer device 100x37
Jan 24 00:37:25.957659 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:37:25.957674 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:37:25.957689 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:37:25.957706 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:37:25.957720 kernel: Segment Routing with IPv6
Jan 24 00:37:25.957734 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:37:25.957748 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:37:25.957762 kernel: Key type dns_resolver registered
Jan 24 00:37:25.957776 kernel: IPI shorthand broadcast: enabled
Jan 24 00:37:25.957815 kernel: sched_clock: Marking stable (506002478, 164074581)->(766949234, -96872175)
Jan 24 00:37:25.957832 kernel: registered taskstats version 1
Jan 24 00:37:25.957847 kernel: Loading compiled-in X.509 certificates
Jan 24 00:37:25.957864 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:37:25.957879 kernel: Key type .fscrypt registered
Jan 24 00:37:25.957893 kernel: Key type fscrypt-provisioning registered
Jan 24 00:37:25.957908 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:37:25.957923 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:37:25.957939 kernel: ima: No architecture policies found
Jan 24 00:37:25.957953 kernel: clk: Disabling unused clocks
Jan 24 00:37:25.957968 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:37:25.957983 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:37:25.958001 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:37:25.958015 kernel: Run /init as init process
Jan 24 00:37:25.958029 kernel: with arguments:
Jan 24 00:37:25.958044 kernel: /init
Jan 24 00:37:25.958058 kernel: with environment:
Jan 24 00:37:25.958072 kernel: HOME=/
Jan 24 00:37:25.958087 kernel: TERM=linux
Jan 24 00:37:25.958115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:37:25.958136 systemd[1]: Detected virtualization amazon.
Jan 24 00:37:25.958155 systemd[1]: Detected architecture x86-64.
Jan 24 00:37:25.958170 systemd[1]: Running in initrd.
Jan 24 00:37:25.958184 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:37:25.958198 systemd[1]: Hostname set to <localhost>.
Jan 24 00:37:25.958214 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:37:25.958229 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:37:25.958244 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:37:25.958262 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:37:25.958279 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:37:25.958294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:37:25.958310 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:37:25.958328 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:37:25.958349 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:37:25.958365 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:37:25.958380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:37:25.958395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:37:25.958410 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:37:25.958426 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:37:25.958441 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:37:25.958460 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:37:25.958475 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:37:25.958490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:37:25.958505 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:37:25.958521 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:37:25.958536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:37:25.958551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:37:25.958566 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:37:25.958581 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:37:25.958600 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:37:25.958615 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:37:25.958630 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:37:25.958645 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:37:25.958660 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:37:25.958674 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:37:25.958688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:37:25.958732 systemd-journald[179]: Collecting audit messages is disabled.
Jan 24 00:37:25.958770 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:37:25.958785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:37:25.958802 systemd[1]: Finished systemd-fsck-usr.service.
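Note: the "\x2d" sequences in the device unit names above are systemd's path escaping (each "-" inside a path component becomes \x2d, while "/" becomes "-"); the mapping can be reproduced with systemd-escape:

    systemd-escape -p --suffix=device /dev/disk/by-label/EFI-SYSTEM
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device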
Jan 24 00:37:25.958824 systemd-journald[179]: Journal started
Jan 24 00:37:25.958856 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2fd33f231f14cbdd536edcaf442620) is 4.7M, max 38.2M, 33.4M free.
Jan 24 00:37:25.958701 systemd-modules-load[180]: Inserted module 'overlay'
Jan 24 00:37:25.965685 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:37:25.976336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:37:25.989456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:37:25.994079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:37:26.010122 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:37:26.010196 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:37:26.013617 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:37:26.017339 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:37:26.020864 kernel: Bridge firewalling registered
Jan 24 00:37:26.020122 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 24 00:37:26.021394 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:37:26.023524 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:37:26.031372 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:37:26.037784 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:37:26.043834 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:37:26.048510 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:37:26.053597 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:37:26.060392 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:37:26.063605 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:37:26.082887 dracut-cmdline[210]: dracut-dracut-053
Jan 24 00:37:26.087537 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:37:26.109249 systemd-resolved[214]: Positive Trust Anchors:
Jan 24 00:37:26.110218 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:37:26.110279 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:37:26.117049 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 24 00:37:26.120586 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:37:26.121947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:37:26.175137 kernel: SCSI subsystem initialized
Jan 24 00:37:26.185128 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:37:26.197139 kernel: iscsi: registered transport (tcp)
Jan 24 00:37:26.219792 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:37:26.219876 kernel: QLogic iSCSI HBA Driver
Jan 24 00:37:26.259757 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:37:26.265341 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:37:26.291133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:37:26.291207 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:37:26.294466 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:37:26.336133 kernel: raid6: avx512x4 gen() 18082 MB/s
Jan 24 00:37:26.354143 kernel: raid6: avx512x2 gen() 17854 MB/s
Jan 24 00:37:26.372131 kernel: raid6: avx512x1 gen() 17862 MB/s
Jan 24 00:37:26.390125 kernel: raid6: avx2x4 gen() 17793 MB/s
Jan 24 00:37:26.408130 kernel: raid6: avx2x2 gen() 17791 MB/s
Jan 24 00:37:26.426473 kernel: raid6: avx2x1 gen() 13886 MB/s
Jan 24 00:37:26.426520 kernel: raid6: using algorithm avx512x4 gen() 18082 MB/s
Jan 24 00:37:26.445743 kernel: raid6: .... xor() 7574 MB/s, rmw enabled
Jan 24 00:37:26.445789 kernel: raid6: using avx512x2 recovery algorithm
Jan 24 00:37:26.468146 kernel: xor: automatically using best checksumming function avx
Jan 24 00:37:26.627137 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:37:26.637764 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:37:26.642340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:37:26.669819 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 24 00:37:26.675089 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:37:26.685363 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:37:26.703294 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Jan 24 00:37:26.734920 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:37:26.739376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:37:26.793304 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:37:26.804751 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
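Note: apropos the bridge-firewalling message logged earlier ("Update your scripts to load br_netfilter..."): on systems that still expect bridged traffic to traverse iptables, the module must be loaded explicitly; a minimal sketch:

    modprobe br_netfilter
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # persist across boots
    sysctl net.bridge.bridge-nf-call-iptables=1                 # key only exists once the module is loaded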
Jan 24 00:37:26.824654 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:37:26.827902 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:37:26.828804 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:37:26.830216 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:37:26.837340 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:37:26.864054 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:37:26.903123 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:37:26.930773 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:37:26.930858 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:37:26.932495 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:37:26.939753 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 24 00:37:26.940007 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 24 00:37:26.932757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:37:26.939159 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:37:26.943002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:37:26.943366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:37:26.945351 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:37:26.952135 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 24 00:37:26.952421 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 24 00:37:26.952449 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 24 00:37:26.960888 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:3d:95:3a:30:3d
Jan 24 00:37:26.963303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:37:26.965194 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 24 00:37:26.968331 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:37:26.968469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:37:26.978656 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:37:26.978731 kernel: GPT:9289727 != 33554431
Jan 24 00:37:26.978753 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:37:26.978774 kernel: GPT:9289727 != 33554431
Jan 24 00:37:26.980062 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:37:26.980137 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:37:26.982025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:37:26.985688 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Jan 24 00:37:27.010981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:37:27.021304 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:37:27.047428 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:37:27.110221 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Jan 24 00:37:27.135116 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
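Note: the "GPT:9289727 != 33554431" complaints mean the backup GPT header still sits at the end of the original (smaller) disk image while the EBS volume is larger; the disk-uuid step just below rewrites the headers, and the manual equivalent of the kernel's hint would be something like:

    sgdisk --move-second-header /dev/nvme0n1   # relocate the backup header to the true end of disk
    parted /dev/nvme0n1 print                  # parted detects the mismatch and offers to fix it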
Jan 24 00:37:27.139915 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (450)
Jan 24 00:37:27.192553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 24 00:37:27.206263 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 24 00:37:27.206852 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 24 00:37:27.214353 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 24 00:37:27.221324 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:37:27.228834 disk-uuid[631]: Primary Header is updated.
Jan 24 00:37:27.228834 disk-uuid[631]: Secondary Entries is updated.
Jan 24 00:37:27.228834 disk-uuid[631]: Secondary Header is updated.
Jan 24 00:37:27.239141 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:37:27.244121 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:37:28.255136 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:37:28.255200 disk-uuid[632]: The operation has completed successfully.
Jan 24 00:37:28.373413 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:37:28.373525 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:37:28.394331 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:37:28.399594 sh[977]: Success
Jan 24 00:37:28.421404 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 24 00:37:28.521634 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:37:28.529223 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:37:28.530685 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:37:28.571946 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:37:28.572023 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:37:28.572045 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:37:28.576027 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:37:28.576094 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:37:28.648145 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 24 00:37:28.682242 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:37:28.683312 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:37:28.694337 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:37:28.698294 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:37:28.723138 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:37:28.723213 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:37:28.725620 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 24 00:37:28.735124 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 24 00:37:28.748363 systemd[1]: mnt-oem.mount: Deactivated successfully.
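Note: conceptually, verity-setup binds the verity.usr= partition and verity.usrhash= root hash from the kernel command line into /dev/mapper/usr. A rough manual sketch, illustrative only (Flatcar's initrd locates the hash tree appended to the USR partition itself; HASH_OFFSET is a hypothetical placeholder, not a value from this log):

    USR=/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132
    veritysetup open "$USR" usr "$USR" \
        f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 \
        --hash-offset="$HASH_OFFSET"   # hash tree lives on the same partition as the data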
Jan 24 00:37:28.753328 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:37:28.760184 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:37:28.770680 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:37:28.808645 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:37:28.818406 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:37:28.840200 systemd-networkd[1169]: lo: Link UP
Jan 24 00:37:28.840211 systemd-networkd[1169]: lo: Gained carrier
Jan 24 00:37:28.841945 systemd-networkd[1169]: Enumeration completed
Jan 24 00:37:28.842433 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:37:28.842438 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:37:28.843576 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:37:28.845690 systemd[1]: Reached target network.target - Network.
Jan 24 00:37:28.846605 systemd-networkd[1169]: eth0: Link UP
Jan 24 00:37:28.846611 systemd-networkd[1169]: eth0: Gained carrier
Jan 24 00:37:28.846623 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:37:28.864867 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.23.37/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 24 00:37:29.052157 ignition[1112]: Ignition 2.19.0
Jan 24 00:37:29.052167 ignition[1112]: Stage: fetch-offline
Jan 24 00:37:29.052365 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:37:29.052374 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:37:29.055351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:37:29.053418 ignition[1112]: Ignition finished successfully
Jan 24 00:37:29.061336 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 00:37:29.077369 ignition[1178]: Ignition 2.19.0
Jan 24 00:37:29.077383 ignition[1178]: Stage: fetch
Jan 24 00:37:29.077842 ignition[1178]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:37:29.077855 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:37:29.077979 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:37:29.099843 ignition[1178]: PUT result: OK
Jan 24 00:37:29.102374 ignition[1178]: parsed url from cmdline: ""
Jan 24 00:37:29.102386 ignition[1178]: no config URL provided
Jan 24 00:37:29.102397 ignition[1178]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:37:29.102414 ignition[1178]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:37:29.102465 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:37:29.104115 ignition[1178]: PUT result: OK
Jan 24 00:37:29.104920 ignition[1178]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 24 00:37:29.106283 ignition[1178]: GET result: OK
Jan 24 00:37:29.106383 ignition[1178]: parsing config with SHA512: b3d37a94c2d767afbeb3b93b00e3b29734e6ce56d75659a7399c04dec0202e53188631d9492002afc2293e123772f72a72e328bd451d9ebbb38ec13fab8d2d2b
Jan 24 00:37:29.113547 unknown[1178]: fetched base config from "system"
Jan 24 00:37:29.113559 unknown[1178]: fetched base config from "system"
Jan 24 00:37:29.114175 ignition[1178]: fetch: fetch complete
Jan 24 00:37:29.113568 unknown[1178]: fetched user config from "aws"
Jan 24 00:37:29.114182 ignition[1178]: fetch: fetch passed
Jan 24 00:37:29.114247 ignition[1178]: Ignition finished successfully
Jan 24 00:37:29.117280 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:37:29.122370 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:37:29.138814 ignition[1185]: Ignition 2.19.0
Jan 24 00:37:29.138828 ignition[1185]: Stage: kargs
Jan 24 00:37:29.139317 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:37:29.139331 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:37:29.139451 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:37:29.140250 ignition[1185]: PUT result: OK
Jan 24 00:37:29.143170 ignition[1185]: kargs: kargs passed
Jan 24 00:37:29.143244 ignition[1185]: Ignition finished successfully
Jan 24 00:37:29.145053 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:37:29.149326 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:37:29.167653 ignition[1191]: Ignition 2.19.0
Jan 24 00:37:29.167668 ignition[1191]: Stage: disks
Jan 24 00:37:29.168181 ignition[1191]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:37:29.168196 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:37:29.168322 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:37:29.169255 ignition[1191]: PUT result: OK
Jan 24 00:37:29.171742 ignition[1191]: disks: disks passed
Jan 24 00:37:29.171829 ignition[1191]: Ignition finished successfully
Jan 24 00:37:29.173331 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:37:29.174336 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:37:29.174964 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:37:29.175392 systemd[1]: Reached target local-fs.target - Local File Systems.
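Note: the PUT-then-GET pattern in the Ignition fetch stage above is the IMDSv2 session flow; the same exchange can be replayed by hand to verify what Ignition fetched (the token TTL below is an arbitrary choice):

    TOKEN=$(curl -sf -X PUT http://169.254.169.254/latest/api/token \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
    curl -sf -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/2019-10-01/user-data | sha512sum
    # the digest should match the "parsing config with SHA512: b3d37a94..." line above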
Jan 24 00:37:29.175937 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:37:29.176519 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:37:29.184353 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:37:29.220499 systemd-fsck[1200]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 24 00:37:29.223411 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:37:29.228212 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:37:29.334217 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:37:29.334576 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:37:29.335705 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:37:29.342233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:37:29.346228 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:37:29.347012 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:37:29.347058 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:37:29.347094 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:37:29.361822 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:37:29.367255 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:37:29.369096 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219)
Jan 24 00:37:29.370117 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:37:29.370141 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:37:29.370155 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 24 00:37:29.386125 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 24 00:37:29.388302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:37:29.711607 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:37:29.742581 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:37:29.748267 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:37:29.753875 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:37:30.011373 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:37:30.024247 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:37:30.026263 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:37:30.034974 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:37:30.035573 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:37:30.064400 ignition[1331]: INFO : Ignition 2.19.0
Jan 24 00:37:30.065526 ignition[1331]: INFO : Stage: mount
Jan 24 00:37:30.066882 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:37:30.066882 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:37:30.066882 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:37:30.069013 ignition[1331]: INFO : PUT result: OK
Jan 24 00:37:30.072548 ignition[1331]: INFO : mount: mount passed
Jan 24 00:37:30.074140 ignition[1331]: INFO : Ignition finished successfully
Jan 24 00:37:30.076218 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:37:30.082239 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:37:30.086263 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:37:30.100401 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:37:30.122132 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345)
Jan 24 00:37:30.126462 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:37:30.126538 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:37:30.126553 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 24 00:37:30.134125 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 24 00:37:30.136866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:37:30.159884 ignition[1361]: INFO : Ignition 2.19.0
Jan 24 00:37:30.159884 ignition[1361]: INFO : Stage: files
Jan 24 00:37:30.159884 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:37:30.159884 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:37:30.159884 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:37:30.162476 ignition[1361]: INFO : PUT result: OK
Jan 24 00:37:30.163892 ignition[1361]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:37:30.165094 ignition[1361]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:37:30.165094 ignition[1361]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:37:30.182021 ignition[1361]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:37:30.182803 ignition[1361]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:37:30.182803 ignition[1361]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:37:30.182534 unknown[1361]: wrote ssh authorized keys file for user: core
Jan 24 00:37:30.185981 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 24 00:37:30.186685 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 24 00:37:30.266975 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:37:30.448047 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 24 00:37:30.448047 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:37:30.450191 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 24 00:37:30.649262 systemd-networkd[1169]: eth0: Gained IPv6LL
Jan 24 00:37:30.917397 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 00:37:31.522362 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 24 00:37:31.522362 ignition[1361]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 00:37:31.536059 ignition[1361]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:37:31.537351 ignition[1361]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:37:31.537351 ignition[1361]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 00:37:31.537351 ignition[1361]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:37:31.537351 ignition[1361]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:37:31.537351 ignition[1361]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:37:31.537351 ignition[1361]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:37:31.537351 ignition[1361]: INFO : files: files passed
Jan 24 00:37:31.537351 ignition[1361]: INFO : Ignition finished successfully
Jan 24 00:37:31.538111 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:37:31.544316 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:37:31.547289 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:37:31.550909 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:37:31.550998 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:37:31.561629 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:37:31.561629 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:37:31.564399 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:37:31.566809 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:37:31.567796 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:37:31.579396 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:37:31.606259 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:37:31.606406 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:37:31.607669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:37:31.608963 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:37:31.609839 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:37:31.616313 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:37:31.629546 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:37:31.636327 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:37:31.649065 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:37:31.649884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:37:31.650884 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:37:31.651748 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:37:31.651931 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:37:31.653217 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:37:31.654059 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:37:31.654866 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:37:31.655656 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:37:31.656455 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:37:31.657383 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:37:31.658178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:37:31.658980 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:37:31.660190 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:37:31.661038 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:37:31.661787 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:37:31.661966 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:37:31.663077 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:37:31.663895 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:37:31.664593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:37:31.665496 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:37:31.666761 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:37:31.666946 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:37:31.668148 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:37:31.668338 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:37:31.669279 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:37:31.669459 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:37:31.675364 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:37:31.679446 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:37:31.680048 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:37:31.680319 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:37:31.683201 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:37:31.683410 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:37:31.695668 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:37:31.695798 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:37:31.707251 ignition[1415]: INFO : Ignition 2.19.0
Jan 24 00:37:31.707251 ignition[1415]: INFO : Stage: umount
Jan 24 00:37:31.710222 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:37:31.710222 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:37:31.710222 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:37:31.710222 ignition[1415]: INFO : PUT result: OK
Jan 24 00:37:31.714285 ignition[1415]: INFO : umount: umount passed
Jan 24 00:37:31.714999 ignition[1415]: INFO : Ignition finished successfully
Jan 24 00:37:31.716300 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:37:31.717118 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:37:31.718872 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:37:31.718998 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:37:31.719634 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:37:31.719691 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:37:31.720549 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 24 00:37:31.720612 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 24 00:37:31.721199 systemd[1]: Stopped target network.target - Network.
Jan 24 00:37:31.721773 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:37:31.721845 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:37:31.722490 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:37:31.723196 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:37:31.723682 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:37:31.724258 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:37:31.726174 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:37:31.727937 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:37:31.727997 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:37:31.728658 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:37:31.728828 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:37:31.729393 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:37:31.729461 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:37:31.730051 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:37:31.730180 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:37:31.731325 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:37:31.731977 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:37:31.734247 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:37:31.735042 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:37:31.735477 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:37:31.736168 systemd-networkd[1169]: eth0: DHCPv6 lease lost
Jan 24 00:37:31.738399 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:37:31.738503 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:37:31.739708 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:37:31.739848 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:37:31.743076 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:37:31.743754 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:37:31.748373 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:37:31.749123 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:37:31.749210 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:37:31.750297 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:37:31.757530 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:37:31.757673 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:37:31.761226 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:37:31.761426 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:37:31.769623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:37:31.769716 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:37:31.770906 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:37:31.770955 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:37:31.771756 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:37:31.771824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:37:31.773864 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:37:31.773939 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:37:31.774921 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:37:31.774992 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:37:31.782284 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:37:31.783661 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:37:31.783749 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:37:31.785247 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:37:31.785320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:37:31.786795 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:37:31.786855 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:37:31.787466 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 00:37:31.787526 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:37:31.790141 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:37:31.790204 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:37:31.791220 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:37:31.791281 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:37:31.791967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:37:31.792023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:37:31.793601 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:37:31.793725 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:37:31.794876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:37:31.794991 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:37:31.796505 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:37:31.802410 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:37:31.812576 systemd[1]: Switching root.
Jan 24 00:37:31.856697 systemd-journald[179]: Journal stopped
Jan 24 00:37:34.325889 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:37:34.325988 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:37:34.326014 kernel: SELinux: policy capability open_perms=1
Jan 24 00:37:34.326041 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:37:34.326068 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:37:34.326093 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:37:34.326140 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:37:34.326158 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:37:34.326176 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:37:34.326194 kernel: audit: type=1403 audit(1769215053.023:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:37:34.326220 systemd[1]: Successfully loaded SELinux policy in 42.995ms.
Jan 24 00:37:34.326254 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.437ms.
Jan 24 00:37:34.326277 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:37:34.326302 systemd[1]: Detected virtualization amazon.
Jan 24 00:37:34.326323 systemd[1]: Detected architecture x86-64.
Jan 24 00:37:34.326344 systemd[1]: Detected first boot.
Jan 24 00:37:34.326369 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:37:34.326390 zram_generator::config[1458]: No configuration found.
Jan 24 00:37:34.326412 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:37:34.326433 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:37:34.326454 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:37:34.326475 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:37:34.326501 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:37:34.326522 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:37:34.326543 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:37:34.326564 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:37:34.326586 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:37:34.326607 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:37:34.326628 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:37:34.326649 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:37:34.326673 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:37:34.326694 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:37:34.326716 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:37:34.326735 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:37:34.326754 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:37:34.326775 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:37:34.326796 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:37:34.326819 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:37:34.326842 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:37:34.326868 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:37:34.326891 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:37:34.326914 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:37:34.326935 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:37:34.326958 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:37:34.326980 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:37:34.327003 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:37:34.327026 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:37:34.327059 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:37:34.327082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:37:34.328176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:37:34.328211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:37:34.328233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:37:34.328255 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:37:34.328277 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:37:34.328314 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:37:34.328336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:37:34.328364 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:37:34.328383 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:37:34.328403 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:37:34.328424 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:37:34.328445 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:37:34.328464 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:37:34.328484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:37:34.328504 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:37:34.328527 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:37:34.328546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:37:34.328567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:37:34.328586 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:37:34.328605 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:37:34.328624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:37:34.328643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:37:34.328663 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:37:34.328683 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:37:34.328717 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:37:34.328737 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:37:34.328757 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:37:34.328776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:37:34.328796 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:37:34.328814 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:37:34.328833 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:37:34.328853 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:37:34.328871 systemd[1]: Stopped verity-setup.service.
Jan 24 00:37:34.328894 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:37:34.328914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:37:34.328934 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:37:34.328952 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:37:34.328972 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:37:34.328992 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:37:34.329012 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:37:34.329037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:37:34.329057 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:37:34.329076 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:37:34.329095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:37:34.329148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:37:34.329167 kernel: fuse: init (API version 7.39)
Jan 24 00:37:34.329193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:37:34.329212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:37:34.329233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:37:34.329257 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:37:34.329278 kernel: loop: module loaded
Jan 24 00:37:34.329339 systemd-journald[1536]: Collecting audit messages is disabled.
Jan 24 00:37:34.329385 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:37:34.329408 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:37:34.329430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:37:34.329453 systemd-journald[1536]: Journal started
Jan 24 00:37:34.329496 systemd-journald[1536]: Runtime Journal (/run/log/journal/ec2fd33f231f14cbdd536edcaf442620) is 4.7M, max 38.2M, 33.4M free.
Jan 24 00:37:33.925355 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:37:33.981293 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 24 00:37:34.331928 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:37:33.981728 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:37:34.336148 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:37:34.339178 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:37:34.363355 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:37:34.377931 kernel: ACPI: bus type drm_connector registered
Jan 24 00:37:34.377238 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:37:34.385253 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:37:34.385934 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:37:34.385994 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:37:34.390613 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:37:34.408138 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:37:34.415416 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:37:34.418349 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:37:34.422336 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:37:34.424446 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:37:34.426210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:37:34.433323 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:37:34.434227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:37:34.444414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:37:34.448296 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:37:34.453317 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:37:34.461253 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:37:34.462384 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:37:34.463404 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:37:34.465628 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:37:34.467509 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:37:34.469635 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:37:34.488649 systemd-journald[1536]: Time spent on flushing to /var/log/journal/ec2fd33f231f14cbdd536edcaf442620 is 120.744ms for 983 entries.
Jan 24 00:37:34.488649 systemd-journald[1536]: System Journal (/var/log/journal/ec2fd33f231f14cbdd536edcaf442620) is 8.0M, max 195.6M, 187.6M free.
Jan 24 00:37:34.646484 systemd-journald[1536]: Received client request to flush runtime journal.
Jan 24 00:37:34.646563 kernel: loop0: detected capacity change from 0 to 140768
Jan 24 00:37:34.492074 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:37:34.500595 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:37:34.508567 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:37:34.510427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:37:34.529590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:37:34.541678 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:37:34.560062 udevadm[1599]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 24 00:37:34.609579 systemd-tmpfiles[1586]: ACLs are not supported, ignoring.
Jan 24 00:37:34.609604 systemd-tmpfiles[1586]: ACLs are not supported, ignoring.
Jan 24 00:37:34.621254 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:37:34.632653 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:37:34.652719 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:37:34.663991 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:37:34.670028 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:37:34.727137 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:37:34.743145 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:37:34.751199 kernel: loop1: detected capacity change from 0 to 224512
Jan 24 00:37:34.757359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:37:34.780238 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Jan 24 00:37:34.780271 systemd-tmpfiles[1611]: ACLs are not supported, ignoring.
Jan 24 00:37:34.788493 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:37:34.882131 kernel: loop2: detected capacity change from 0 to 61336
Jan 24 00:37:34.980247 kernel: loop3: detected capacity change from 0 to 142488
Jan 24 00:37:35.083132 kernel: loop4: detected capacity change from 0 to 140768
Jan 24 00:37:35.133138 kernel: loop5: detected capacity change from 0 to 224512
Jan 24 00:37:35.175138 kernel: loop6: detected capacity change from 0 to 61336
Jan 24 00:37:35.205136 kernel: loop7: detected capacity change from 0 to 142488
Jan 24 00:37:35.231960 (sd-merge)[1617]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 24 00:37:35.233086 (sd-merge)[1617]: Merged extensions into '/usr'.
Jan 24 00:37:35.237275 systemd[1]: Reloading requested from client PID 1585 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:37:35.237411 systemd[1]: Reloading...
Jan 24 00:37:35.323157 zram_generator::config[1642]: No configuration found.
Jan 24 00:37:35.470394 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:37:35.523881 systemd[1]: Reloading finished in 285 ms.
Jan 24 00:37:35.546074 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:37:35.549056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:37:35.557553 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:37:35.559663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:37:35.563284 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:37:35.573345 systemd[1]: Reloading requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:37:35.573365 systemd[1]: Reloading...
Jan 24 00:37:35.603341 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:37:35.603702 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:37:35.604604 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:37:35.607679 systemd-udevd[1697]: Using default interface naming scheme 'v255'.
Jan 24 00:37:35.608340 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Jan 24 00:37:35.608488 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Jan 24 00:37:35.614190 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:37:35.614317 systemd-tmpfiles[1696]: Skipping /boot
Jan 24 00:37:35.627026 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:37:35.627156 systemd-tmpfiles[1696]: Skipping /boot
Jan 24 00:37:35.668196 zram_generator::config[1723]: No configuration found.
Jan 24 00:37:35.770968 (udev-worker)[1745]: Network interface NamePolicy= disabled on kernel command line.
Jan 24 00:37:35.850122 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 24 00:37:35.856125 kernel: ACPI: button: Power Button [PWRF]
Jan 24 00:37:35.877050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:37:35.880406 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Jan 24 00:37:35.883517 kernel: ACPI: button: Sleep Button [SLPF]
Jan 24 00:37:35.894127 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Jan 24 00:37:35.926148 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jan 24 00:37:35.936148 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1755)
Jan 24 00:37:36.031383 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:37:36.031941 systemd[1]: Reloading finished in 458 ms.
Jan 24 00:37:36.033359 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
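The (sd-merge) and systemd-sysext lines above record the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) being overlaid onto /usr, followed by a daemon reload. A minimal sketch for inspecting that merged state on the booted host, assuming the standard systemd-sysext CLI (systemd 251 or newer) and root privileges; this is an illustrative helper, not part of the boot sequence itself:

    import subprocess

    # Lists the extension images systemd-sysext found (for example, the kubernetes
    # image linked at /etc/extensions/kubernetes.raw by the Ignition files stage above).
    subprocess.run(["systemd-sysext", "list"], check=True)

    # Shows which hierarchies currently have extensions merged, matching the
    # "Merged extensions into '/usr'" message above.
    subprocess.run(["systemd-sysext", "status"], check=True)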
Jan 24 00:37:36.064798 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:37:36.067023 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:37:36.069309 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:37:36.125399 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:37:36.197745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:37:36.206230 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:37:36.213230 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:37:36.214253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:37:36.216992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:37:36.226457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:37:36.236297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:37:36.238440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:37:36.247556 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:37:36.258494 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:37:36.263550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:37:36.269480 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:37:36.271810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:37:36.272479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:37:36.277778 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:37:36.279049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:37:36.279264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:37:36.280504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:37:36.280684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:37:36.282895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:37:36.283094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:37:36.293469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 24 00:37:36.312676 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:37:36.313085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:37:36.317461 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:37:36.322244 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:37:36.326545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:37:36.330448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:37:36.341467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:37:36.342375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:37:36.349263 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:37:36.350051 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:37:36.360579 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:37:36.361565 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:37:36.363900 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:37:36.365184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:37:36.367611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:37:36.367803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:37:36.379857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:37:36.384726 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:37:36.385772 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:37:36.394903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:37:36.395170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:37:36.397682 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:37:36.398830 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:37:36.410605 augenrules[1925]: No rules
Jan 24 00:37:36.412908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:37:36.418298 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:37:36.419966 lvm[1914]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:37:36.436473 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:37:36.443255 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:37:36.455329 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:37:36.462220 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:37:36.464594 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:37:36.474355 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:37:36.475047 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:37:36.497695 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:37:36.500437 lvm[1944]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:37:36.534216 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:37:36.536288 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:37:36.546016 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:37:36.560436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:37:36.592307 systemd-networkd[1900]: lo: Link UP
Jan 24 00:37:36.592662 systemd-networkd[1900]: lo: Gained carrier
Jan 24 00:37:36.594563 systemd-networkd[1900]: Enumeration completed
Jan 24 00:37:36.594822 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:37:36.596343 systemd-networkd[1900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:37:36.597272 systemd-networkd[1900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:37:36.602526 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 24 00:37:36.604264 systemd-networkd[1900]: eth0: Link UP
Jan 24 00:37:36.605789 systemd-networkd[1900]: eth0: Gained carrier
Jan 24 00:37:36.605821 systemd-networkd[1900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:37:36.615191 systemd-networkd[1900]: eth0: DHCPv4 address 172.31.23.37/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 24 00:37:36.616471 systemd-resolved[1902]: Positive Trust Anchors:
Jan 24 00:37:36.616492 systemd-resolved[1902]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:37:36.616552 systemd-resolved[1902]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:37:36.635520 systemd-resolved[1902]: Defaulting to hostname 'linux'.
Jan 24 00:37:36.637476 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:37:36.638150 systemd[1]: Reached target network.target - Network.
Jan 24 00:37:36.638665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:37:36.639133 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:37:36.639679 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:37:36.640207 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:37:36.640846 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:37:36.641387 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:37:36.641790 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:37:36.642228 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:37:36.642327 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:37:36.642710 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:37:36.644498 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:37:36.646447 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:37:36.655543 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:37:36.656915 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:37:36.657530 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:37:36.657916 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:37:36.658397 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:37:36.658440 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:37:36.659652 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:37:36.664299 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 24 00:37:36.670391 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:37:36.674279 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:37:36.678310 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:37:36.678948 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:37:36.686450 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:37:36.690316 systemd[1]: Started ntpd.service - Network Time Service.
Jan 24 00:37:36.694245 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 00:37:36.697293 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 24 00:37:36.701358 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:37:36.706294 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:37:36.721418 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:37:36.723637 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:37:36.724349 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:37:36.732192 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:37:36.742497 jq[1960]: false
Jan 24 00:37:36.745470 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:37:36.751620 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:37:36.753240 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:37:36.755853 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:37:36.756150 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found loop4
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found loop5
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found loop6
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found loop7
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1p1
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1p2
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1p3
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found usr
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1p4
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1p6
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1p7
Jan 24 00:37:36.812302 extend-filesystems[1961]: Found nvme0n1p9
Jan 24 00:37:36.812302 extend-filesystems[1961]: Checking size of /dev/nvme0n1p9
Jan 24 00:37:36.832549 dbus-daemon[1959]: [system] SELinux support is enabled
Jan 24 00:37:36.913084 extend-filesystems[1961]: Resized partition /dev/nvme0n1p9
Jan 24 00:37:36.928247 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 24 00:37:36.928291 jq[1970]: true
Jan 24 00:37:36.833050 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.822 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.826 INFO Fetch successful
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.827 INFO Fetch successful
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.827 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.827 INFO Fetch successful
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.828 INFO Fetch successful
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.830 INFO Fetch failed with 404: resource not found
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.830 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.838 INFO Fetch successful
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.838 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.851 INFO Fetch successful
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.859 INFO Fetch successful
Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.859 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 24 00:37:36.861 INFO Fetch successful Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.861 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 24 00:37:36.928535 coreos-metadata[1958]: Jan 24 00:37:36.864 INFO Fetch successful Jan 24 00:37:36.853292 dbus-daemon[1959]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1900 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:37:36.929476 extend-filesystems[1998]: resize2fs 1.47.1 (20-May-2024) Jan 24 00:37:36.839004 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:37:36.874242 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:37:36.839042 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:37:36.949890 tar[1980]: linux-amd64/LICENSE Jan 24 00:37:36.949890 tar[1980]: linux-amd64/helm Jan 24 00:37:36.839672 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:37:36.839695 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:37:36.960881 ntpd[1963]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: ---------------------------------------------------- Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: corporation. 
Support and training for ntp-4 are Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: available at https://www.nwtime.org/support Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: ---------------------------------------------------- Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: proto: precision = 0.087 usec (-23) Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: basedate set to 2026-01-11 Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: gps base set to 2026-01-11 (week 2401) Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:37:36.971978 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:37:36.976435 update_engine[1969]: I20260124 00:37:36.960124 1969 main.cc:92] Flatcar Update Engine starting Jan 24 00:37:36.894263 (ntainerd)[1988]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:37:36.960907 ntpd[1963]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:37:37.001365 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:37:37.001365 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Listen normally on 3 eth0 172.31.23.37:123 Jan 24 00:37:37.001365 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Listen normally on 4 lo [::1]:123 Jan 24 00:37:37.001365 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: bind(21) AF_INET6 fe80::43d:95ff:fe3a:303d%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:37:37.001365 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: unable to create socket on eth0 (5) for fe80::43d:95ff:fe3a:303d%2#123 Jan 24 00:37:37.001365 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: failed to init interface for address fe80::43d:95ff:fe3a:303d%2 Jan 24 00:37:37.001365 ntpd[1963]: 24 Jan 00:37:36 ntpd[1963]: Listening on routing socket on fd #21 for interface updates Jan 24 00:37:37.001610 jq[1989]: true Jan 24 00:37:36.894711 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:37:37.006463 update_engine[1969]: I20260124 00:37:36.992658 1969 update_check_scheduler.cc:74] Next update check in 4m10s Jan 24 00:37:36.960920 ntpd[1963]: ---------------------------------------------------- Jan 24 00:37:37.013082 ntpd[1963]: 24 Jan 00:37:37 ntpd[1963]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:37.013082 ntpd[1963]: 24 Jan 00:37:37 ntpd[1963]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:36.913166 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:37:36.960930 ntpd[1963]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:37:36.913399 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:37:36.960940 ntpd[1963]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:37:36.987197 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:37:36.960949 ntpd[1963]: corporation. Support and training for ntp-4 are Jan 24 00:37:37.000424 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
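
The coreos-metadata exchange recorded above is the standard IMDSv2 flow: a PUT against /latest/api/token to mint a session token, then GETs under the 2021-01-03 metadata tree with the token attached, treating a 404 as "resource not present" (the ipv6 fetch). A minimal sketch of that sequence using only the Python standard library; the endpoint paths come from the log, while the token-TTL value is an assumption:

    import urllib.request
    from urllib.error import HTTPError

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 21600) -> str:
        # IMDSv2: mint a session token first ("Putting .../api/token" above).
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str):
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token})
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.read().decode()
        except HTTPError as err:
            if err.code == 404:  # e.g. the ipv6 fetch above on an IPv4-only instance
                return None
            raise

    token = imds_token()
    for path in ("instance-id", "instance-type", "local-ipv4", "ipv6"):
        print(path, "=", imds_get(path, token))
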
Jan 24 00:37:36.960959 ntpd[1963]: available at https://www.nwtime.org/support Jan 24 00:37:36.960970 ntpd[1963]: ---------------------------------------------------- Jan 24 00:37:36.963878 ntpd[1963]: proto: precision = 0.087 usec (-23) Jan 24 00:37:36.965481 ntpd[1963]: basedate set to 2026-01-11 Jan 24 00:37:36.965501 ntpd[1963]: gps base set to 2026-01-11 (week 2401) Jan 24 00:37:36.969505 ntpd[1963]: Listen and drop on 0 v6wildcard [::]:123 Jan 24 00:37:36.969561 ntpd[1963]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 24 00:37:36.974325 ntpd[1963]: Listen normally on 2 lo 127.0.0.1:123 Jan 24 00:37:36.974373 ntpd[1963]: Listen normally on 3 eth0 172.31.23.37:123 Jan 24 00:37:36.974417 ntpd[1963]: Listen normally on 4 lo [::1]:123 Jan 24 00:37:36.974467 ntpd[1963]: bind(21) AF_INET6 fe80::43d:95ff:fe3a:303d%2#123 flags 0x11 failed: Cannot assign requested address Jan 24 00:37:36.974493 ntpd[1963]: unable to create socket on eth0 (5) for fe80::43d:95ff:fe3a:303d%2#123 Jan 24 00:37:36.974507 ntpd[1963]: failed to init interface for address fe80::43d:95ff:fe3a:303d%2 Jan 24 00:37:36.974542 ntpd[1963]: Listening on routing socket on fd #21 for interface updates Jan 24 00:37:37.002823 ntpd[1963]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:37.002854 ntpd[1963]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 24 00:37:37.065363 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:37:37.066190 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:37:37.086187 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 24 00:37:37.158186 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1746) Jan 24 00:37:37.218273 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 24 00:37:37.239064 extend-filesystems[1998]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 24 00:37:37.239064 extend-filesystems[1998]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 24 00:37:37.239064 extend-filesystems[1998]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 24 00:37:37.238258 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:37:37.256330 bash[2052]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:37:37.256572 extend-filesystems[1961]: Resized filesystem in /dev/nvme0n1p9 Jan 24 00:37:37.238543 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:37:37.260318 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:37:37.278136 systemd[1]: Starting sshkeys.service... Jan 24 00:37:37.322833 locksmithd[2010]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:37:37.333782 systemd-logind[1968]: Watching system buttons on /dev/input/event1 (Power Button) Jan 24 00:37:37.334297 systemd-logind[1968]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 24 00:37:37.334329 systemd-logind[1968]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:37:37.336270 systemd-logind[1968]: New seat seat0. Jan 24 00:37:37.351476 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:37:37.364749 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
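
The extend-filesystems sequence above is an on-line grow: the root partition already spans the larger disk, and the mounted ext4 filesystem on /dev/nvme0n1p9 is resized from 553472 to 3587067 4 KiB blocks (roughly 13.7 GiB) without unmounting. A hedged sketch of the grow step; the device path is taken from the log, root privileges and e2fsprogs are assumed:

    import subprocess

    DEV = "/dev/nvme0n1p9"  # device path from the log

    def grow_ext4(dev: str) -> None:
        # With no explicit size argument, resize2fs grows the filesystem to
        # fill its partition; on a mounted ext4 (/ here) this is an on-line
        # resize, matching "on-line resizing required" above.
        subprocess.run(["resize2fs", dev], check=True)

    if __name__ == "__main__":
        grow_ext4(DEV)
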
Jan 24 00:37:37.373598 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 24 00:37:37.474715 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 24 00:37:37.475191 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 24 00:37:37.489912 dbus-daemon[1959]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1999 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 24 00:37:37.500494 systemd[1]: Starting polkit.service - Authorization Manager... Jan 24 00:37:37.535049 polkitd[2128]: Started polkitd version 121 Jan 24 00:37:37.568317 polkitd[2128]: Loading rules from directory /etc/polkit-1/rules.d Jan 24 00:37:37.568413 polkitd[2128]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 24 00:37:37.571458 polkitd[2128]: Finished loading, compiling and executing 2 rules Jan 24 00:37:37.575564 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 24 00:37:37.575761 systemd[1]: Started polkit.service - Authorization Manager. Jan 24 00:37:37.577377 polkitd[2128]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 24 00:37:37.624837 systemd-resolved[1902]: System hostname changed to 'ip-172-31-23-37'. Jan 24 00:37:37.625258 systemd-networkd[1900]: eth0: Gained IPv6LL Jan 24 00:37:37.627261 systemd-hostnamed[1999]: Hostname set to (transient) Jan 24 00:37:37.630185 coreos-metadata[2100]: Jan 24 00:37:37.630 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 24 00:37:37.640154 coreos-metadata[2100]: Jan 24 00:37:37.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 24 00:37:37.640154 coreos-metadata[2100]: Jan 24 00:37:37.632 INFO Fetch successful Jan 24 00:37:37.640154 coreos-metadata[2100]: Jan 24 00:37:37.632 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 24 00:37:37.640154 coreos-metadata[2100]: Jan 24 00:37:37.632 INFO Fetch successful Jan 24 00:37:37.637653 unknown[2100]: wrote ssh authorized keys file for user: core Jan 24 00:37:37.641625 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:37:37.647350 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:37:37.659179 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 24 00:37:37.673487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:37:37.684762 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:37:37.734605 containerd[1988]: time="2026-01-24T00:37:37.734402878Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:37:37.745128 update-ssh-keys[2160]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:37:37.747021 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 24 00:37:37.751620 systemd[1]: Finished sshkeys.service. Jan 24 00:37:37.815236 containerd[1988]: time="2026-01-24T00:37:37.814992623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:37.836441 containerd[1988]: time="2026-01-24T00:37:37.836313162Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:37.836441 containerd[1988]: time="2026-01-24T00:37:37.836378703Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:37:37.836441 containerd[1988]: time="2026-01-24T00:37:37.836409467Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:37:37.836630 containerd[1988]: time="2026-01-24T00:37:37.836597583Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:37:37.836669 containerd[1988]: time="2026-01-24T00:37:37.836629084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:37.836774 containerd[1988]: time="2026-01-24T00:37:37.836732333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:37.836774 containerd[1988]: time="2026-01-24T00:37:37.836760531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837017139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837050768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837088331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837149293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837261338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837529676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837711069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837740569Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837842294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 24 00:37:37.837903 containerd[1988]: time="2026-01-24T00:37:37.837906386Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:37:37.842583 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.860166985Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.860255859Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.860280313Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.860303235Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.860326452Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.860529813Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.860926331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.861090185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.861130285Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.861155557Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.861179292Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.861199622Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.861218684Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:37:37.861407 containerd[1988]: time="2026-01-24T00:37:37.861241341Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861262371Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861282148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861301331Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861320466Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861349648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861368229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861385012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861403885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861419169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861448758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861470167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861492717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861509854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.861986 containerd[1988]: time="2026-01-24T00:37:37.861531354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.862514 containerd[1988]: time="2026-01-24T00:37:37.861553725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.862514 containerd[1988]: time="2026-01-24T00:37:37.861572117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.862514 containerd[1988]: time="2026-01-24T00:37:37.861592611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.862514 containerd[1988]: time="2026-01-24T00:37:37.861616113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:37:37.862514 containerd[1988]: time="2026-01-24T00:37:37.861646942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.862514 containerd[1988]: time="2026-01-24T00:37:37.861663808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.862514 containerd[1988]: time="2026-01-24T00:37:37.861681503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863590294Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863656944Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863680611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863717085Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863734028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863759115Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863796370Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:37:37.864799 containerd[1988]: time="2026-01-24T00:37:37.863813809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 24 00:37:37.866447 containerd[1988]: time="2026-01-24T00:37:37.866286130Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:37:37.866447 containerd[1988]: time="2026-01-24T00:37:37.866397921Z" level=info msg="Connect containerd service" Jan 24 00:37:37.866729 containerd[1988]: time="2026-01-24T00:37:37.866474779Z" level=info msg="using legacy CRI server" Jan 24 00:37:37.866729 containerd[1988]: time="2026-01-24T00:37:37.866487029Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:37:37.866729 containerd[1988]: time="2026-01-24T00:37:37.866655685Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:37:37.876715 containerd[1988]: time="2026-01-24T00:37:37.875821080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:37:37.876715 containerd[1988]: time="2026-01-24T00:37:37.876011873Z" level=info msg="Start subscribing containerd event" Jan 24 00:37:37.876715 containerd[1988]: time="2026-01-24T00:37:37.876088608Z" level=info msg="Start recovering state" Jan 24 00:37:37.876715 containerd[1988]: time="2026-01-24T00:37:37.876214150Z" level=info msg="Start event monitor" Jan 24 00:37:37.876715 containerd[1988]: time="2026-01-24T00:37:37.876231793Z" level=info msg="Start snapshots syncer" Jan 24 00:37:37.876715 containerd[1988]: time="2026-01-24T00:37:37.876245485Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:37:37.876715 containerd[1988]: time="2026-01-24T00:37:37.876256736Z" level=info msg="Start streaming server" Jan 24 00:37:37.879180 containerd[1988]: time="2026-01-24T00:37:37.877546575Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:37:37.879180 containerd[1988]: time="2026-01-24T00:37:37.877608691Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:37:37.877785 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:37:37.882947 containerd[1988]: time="2026-01-24T00:37:37.881623828Z" level=info msg="containerd successfully booted in 0.148394s" Jan 24 00:37:37.906872 amazon-ssm-agent[2151]: Initializing new seelog logger Jan 24 00:37:37.907354 amazon-ssm-agent[2151]: New Seelog Logger Creation Complete Jan 24 00:37:37.908766 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:37.908766 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:37.911222 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 processing appconfig overrides Jan 24 00:37:37.911997 amazon-ssm-agent[2151]: 2026-01-24 00:37:37 INFO Proxy environment variables: Jan 24 00:37:37.914950 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:37.914950 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
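
containerd ends its startup by serving on /run/containerd/containerd.sock plus the companion ttrpc socket and booting in about 0.15 s; the CNI load error above is expected this early, since /etc/cni/net.d is still empty until a network plugin is installed. A minimal readiness probe for those sockets (paths from the log) using only the standard library; it merely checks that something accepts connections and does not speak the gRPC or ttrpc protocols:

    import socket

    def socket_ready(path: str, timeout: float = 1.0) -> bool:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    for p in ("/run/containerd/containerd.sock",
              "/run/containerd/containerd.sock.ttrpc"):
        print(p, "ready" if socket_ready(p) else "not ready")
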
Jan 24 00:37:37.914950 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 processing appconfig overrides Jan 24 00:37:37.914950 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:37.914950 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:37.914950 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 processing appconfig overrides Jan 24 00:37:37.920654 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:37.920654 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 24 00:37:37.920654 amazon-ssm-agent[2151]: 2026/01/24 00:37:37 processing appconfig overrides Jan 24 00:37:38.019763 amazon-ssm-agent[2151]: 2026-01-24 00:37:37 INFO https_proxy: Jan 24 00:37:38.118662 amazon-ssm-agent[2151]: 2026-01-24 00:37:37 INFO http_proxy: Jan 24 00:37:38.218072 amazon-ssm-agent[2151]: 2026-01-24 00:37:37 INFO no_proxy: Jan 24 00:37:38.317093 amazon-ssm-agent[2151]: 2026-01-24 00:37:37 INFO Checking if agent identity type OnPrem can be assumed Jan 24 00:37:38.415236 amazon-ssm-agent[2151]: 2026-01-24 00:37:37 INFO Checking if agent identity type EC2 can be assumed Jan 24 00:37:38.515117 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO Agent will take identity from EC2 Jan 24 00:37:38.544392 tar[1980]: linux-amd64/README.md Jan 24 00:37:38.560229 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:37:38.604403 sshd_keygen[1997]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:37:38.613279 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:37:38.632798 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:37:38.638545 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [amazon-ssm-agent] Starting Core Agent Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [Registrar] Starting registrar module Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [EC2Identity] EC2 registration was successful. Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [CredentialRefresher] credentialRefresher has started Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [CredentialRefresher] Starting credentials refresher loop Jan 24 00:37:38.648424 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 24 00:37:38.648860 systemd[1]: issuegen.service: Deactivated successfully. 
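
The SSM agent logs the same two-step pattern several times: built-in defaults are loaded, then /etc/amazon/ssm/amazon-ssm-agent.json is found and applied as an override. The agent's actual merge logic is not shown in the log; this is a generic defaults-plus-overrides sketch, and the key names are hypothetical rather than the agent's real schema:

    import json
    from pathlib import Path

    DEFAULTS = {"Agent": {"Region": "", "OrchestrationDirectory": "orchestration"}}  # hypothetical keys

    def apply_overrides(defaults: dict, override_path: str) -> dict:
        merged = json.loads(json.dumps(defaults))  # cheap deep copy

        def merge(dst: dict, src: dict) -> None:
            for key, val in src.items():
                if isinstance(val, dict) and isinstance(dst.get(key), dict):
                    merge(dst[key], val)   # recurse into nested sections
                else:
                    dst[key] = val         # override value wins

        path = Path(override_path)
        if path.exists():                  # "Found config file at ..." above
            merge(merged, json.loads(path.read_text()))
        return merged

    config = apply_overrides(DEFAULTS, "/etc/amazon/ssm/amazon-ssm-agent.json")
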
Jan 24 00:37:38.650467 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:37:38.659486 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:37:38.671731 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:37:38.680309 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:37:38.683032 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:37:38.683979 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:37:38.712432 amazon-ssm-agent[2151]: 2026-01-24 00:37:38 INFO [CredentialRefresher] Next credential rotation will be in 32.016660295866664 minutes Jan 24 00:37:39.664306 amazon-ssm-agent[2151]: 2026-01-24 00:37:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 24 00:37:39.765569 amazon-ssm-agent[2151]: 2026-01-24 00:37:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2204) started Jan 24 00:37:39.865820 amazon-ssm-agent[2151]: 2026-01-24 00:37:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 24 00:37:39.961382 ntpd[1963]: Listen normally on 6 eth0 [fe80::43d:95ff:fe3a:303d%2]:123 Jan 24 00:37:39.961743 ntpd[1963]: 24 Jan 00:37:39 ntpd[1963]: Listen normally on 6 eth0 [fe80::43d:95ff:fe3a:303d%2]:123 Jan 24 00:37:40.186317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:37:40.187332 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:37:40.189245 systemd[1]: Startup finished in 637ms (kernel) + 7.321s (initrd) + 7.206s (userspace) = 15.166s. Jan 24 00:37:40.192379 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:37:41.238064 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:37:41.244471 systemd[1]: Started sshd@0-172.31.23.37:22-4.153.228.146:45962.service - OpenSSH per-connection server daemon (4.153.228.146:45962). Jan 24 00:37:41.307964 kubelet[2220]: E0124 00:37:41.307836 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:37:41.310294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:37:41.310448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:37:41.310722 systemd[1]: kubelet.service: Consumed 1.089s CPU time. Jan 24 00:37:41.756032 sshd[2230]: Accepted publickey for core from 4.153.228.146 port 45962 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:41.758551 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:41.773888 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:37:41.778744 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:37:41.782578 systemd-logind[1968]: New session 1 of user core. Jan 24 00:37:41.797004 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
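
The sshd "Accepted publickey" entries in this log carry the key's fingerprint (RSA SHA256:AB5y…). OpenSSH derives that string as the unpadded base64 of the SHA-256 digest of the wire-format public-key blob, i.e. the base64-decoded middle field of an authorized_keys entry; a stdlib sketch:

    import base64
    import hashlib

    def openssh_fingerprint(authorized_keys_line: str) -> str:
        # Line format: "<type> <base64 blob> [comment]"; the digest is taken
        # over the decoded blob and the '=' padding is stripped, matching
        # the SHA256:... form sshd logs.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Usage (key material elided):
    # openssh_fingerprint("ssh-rsa AAAAB3... core@host")  ->  "SHA256:AB5y..."
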
Jan 24 00:37:41.803488 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:37:41.809466 (systemd)[2236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:37:41.952198 systemd[2236]: Queued start job for default target default.target. Jan 24 00:37:41.958386 systemd[2236]: Created slice app.slice - User Application Slice. Jan 24 00:37:41.958428 systemd[2236]: Reached target paths.target - Paths. Jan 24 00:37:41.958450 systemd[2236]: Reached target timers.target - Timers. Jan 24 00:37:41.959936 systemd[2236]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:37:41.977793 systemd[2236]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:37:41.977914 systemd[2236]: Reached target sockets.target - Sockets. Jan 24 00:37:41.977930 systemd[2236]: Reached target basic.target - Basic System. Jan 24 00:37:41.977975 systemd[2236]: Reached target default.target - Main User Target. Jan 24 00:37:41.978006 systemd[2236]: Startup finished in 161ms. Jan 24 00:37:41.978235 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:37:41.987377 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:37:42.353746 systemd[1]: Started sshd@1-172.31.23.37:22-4.153.228.146:45964.service - OpenSSH per-connection server daemon (4.153.228.146:45964). Jan 24 00:37:42.832359 sshd[2247]: Accepted publickey for core from 4.153.228.146 port 45964 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:42.833774 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:42.839355 systemd-logind[1968]: New session 2 of user core. Jan 24 00:37:42.845406 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:37:43.180654 sshd[2247]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:43.184511 systemd[1]: sshd@1-172.31.23.37:22-4.153.228.146:45964.service: Deactivated successfully. Jan 24 00:37:43.186684 systemd[1]: session-2.scope: Deactivated successfully. Jan 24 00:37:43.188095 systemd-logind[1968]: Session 2 logged out. Waiting for processes to exit. Jan 24 00:37:43.189408 systemd-logind[1968]: Removed session 2. Jan 24 00:37:43.266083 systemd[1]: Started sshd@2-172.31.23.37:22-4.153.228.146:54936.service - OpenSSH per-connection server daemon (4.153.228.146:54936). Jan 24 00:37:43.747708 sshd[2254]: Accepted publickey for core from 4.153.228.146 port 54936 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:43.749269 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:43.754639 systemd-logind[1968]: New session 3 of user core. Jan 24 00:37:43.765362 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:37:45.385895 systemd-resolved[1902]: Clock change detected. Flushing caches. Jan 24 00:37:45.515777 sshd[2254]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:45.518505 systemd[1]: sshd@2-172.31.23.37:22-4.153.228.146:54936.service: Deactivated successfully. Jan 24 00:37:45.520090 systemd[1]: session-3.scope: Deactivated successfully. Jan 24 00:37:45.521226 systemd-logind[1968]: Session 3 logged out. Waiting for processes to exit. Jan 24 00:37:45.522415 systemd-logind[1968]: Removed session 3. Jan 24 00:37:45.603429 systemd[1]: Started sshd@3-172.31.23.37:22-4.153.228.146:54942.service - OpenSSH per-connection server daemon (4.153.228.146:54942). 
Jan 24 00:37:46.096133 sshd[2261]: Accepted publickey for core from 4.153.228.146 port 54942 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:46.097705 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:46.106404 systemd-logind[1968]: New session 4 of user core. Jan 24 00:37:46.115612 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:37:46.451801 sshd[2261]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:46.454988 systemd[1]: sshd@3-172.31.23.37:22-4.153.228.146:54942.service: Deactivated successfully. Jan 24 00:37:46.456590 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:37:46.457832 systemd-logind[1968]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:37:46.459141 systemd-logind[1968]: Removed session 4. Jan 24 00:37:46.537180 systemd[1]: Started sshd@4-172.31.23.37:22-4.153.228.146:54952.service - OpenSSH per-connection server daemon (4.153.228.146:54952). Jan 24 00:37:47.018800 sshd[2268]: Accepted publickey for core from 4.153.228.146 port 54952 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:47.020204 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:47.025185 systemd-logind[1968]: New session 5 of user core. Jan 24 00:37:47.031619 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:37:47.333303 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:37:47.333628 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:37:47.348827 sudo[2271]: pam_unix(sudo:session): session closed for user root Jan 24 00:37:47.425621 sshd[2268]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:47.429519 systemd[1]: sshd@4-172.31.23.37:22-4.153.228.146:54952.service: Deactivated successfully. Jan 24 00:37:47.431164 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:37:47.431839 systemd-logind[1968]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:37:47.432846 systemd-logind[1968]: Removed session 5. Jan 24 00:37:47.511585 systemd[1]: Started sshd@5-172.31.23.37:22-4.153.228.146:54960.service - OpenSSH per-connection server daemon (4.153.228.146:54960). Jan 24 00:37:47.995082 sshd[2276]: Accepted publickey for core from 4.153.228.146 port 54960 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:47.996617 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:48.002049 systemd-logind[1968]: New session 6 of user core. Jan 24 00:37:48.011634 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:37:48.269510 sudo[2280]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:37:48.269916 sudo[2280]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:37:48.274064 sudo[2280]: pam_unix(sudo:session): session closed for user root Jan 24 00:37:48.279678 sudo[2279]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:37:48.280074 sudo[2279]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:37:48.299817 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
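
The audit-rules restart just initiated flushes the in-kernel ruleset and then recompiles whatever remains under /etc/audit/rules.d; since the preceding sudo commands deleted both rules files, auditctl and augenrules each report "No rules" below. A sketch of that conventional pair, assuming the usual auditctl/augenrules tooling and root privileges (the unit's exact Exec lines are not shown in the log):

    import subprocess

    # Flush the in-kernel audit ruleset (auditctl prints "No rules" when empty) ...
    subprocess.run(["auditctl", "-D"], check=True)
    # ... then rebuild and load rules from the /etc/audit/rules.d fragments.
    subprocess.run(["augenrules", "--load"], check=True)
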
Jan 24 00:37:48.302047 auditctl[2283]: No rules Jan 24 00:37:48.302485 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:37:48.302695 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:37:48.305586 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:37:48.336496 augenrules[2301]: No rules Jan 24 00:37:48.338174 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:37:48.339272 sudo[2279]: pam_unix(sudo:session): session closed for user root Jan 24 00:37:48.416336 sshd[2276]: pam_unix(sshd:session): session closed for user core Jan 24 00:37:48.419464 systemd[1]: sshd@5-172.31.23.37:22-4.153.228.146:54960.service: Deactivated successfully. Jan 24 00:37:48.421688 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:37:48.423203 systemd-logind[1968]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:37:48.424344 systemd-logind[1968]: Removed session 6. Jan 24 00:37:48.501403 systemd[1]: Started sshd@6-172.31.23.37:22-4.153.228.146:54962.service - OpenSSH per-connection server daemon (4.153.228.146:54962). Jan 24 00:37:48.984838 sshd[2309]: Accepted publickey for core from 4.153.228.146 port 54962 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:37:48.986474 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:37:48.991534 systemd-logind[1968]: New session 7 of user core. Jan 24 00:37:49.000663 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:37:49.257124 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:37:49.257435 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:37:49.838751 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:37:49.840608 (dockerd)[2328]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:37:50.407000 dockerd[2328]: time="2026-01-24T00:37:50.406945549Z" level=info msg="Starting up" Jan 24 00:37:50.577921 dockerd[2328]: time="2026-01-24T00:37:50.577844742Z" level=info msg="Loading containers: start." Jan 24 00:37:50.720520 kernel: Initializing XFRM netlink socket Jan 24 00:37:50.766225 (udev-worker)[2351]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:37:50.824286 systemd-networkd[1900]: docker0: Link UP Jan 24 00:37:50.846154 dockerd[2328]: time="2026-01-24T00:37:50.846108768Z" level=info msg="Loading containers: done." Jan 24 00:37:50.863781 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck410053202-merged.mount: Deactivated successfully. 
Jan 24 00:37:50.870557 dockerd[2328]: time="2026-01-24T00:37:50.870504894Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:37:50.870722 dockerd[2328]: time="2026-01-24T00:37:50.870611906Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:37:50.870754 dockerd[2328]: time="2026-01-24T00:37:50.870720237Z" level=info msg="Daemon has completed initialization" Jan 24 00:37:50.912812 dockerd[2328]: time="2026-01-24T00:37:50.912755754Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:37:50.913371 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:37:52.220338 containerd[1988]: time="2026-01-24T00:37:52.220302218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:37:52.771987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:37:52.773601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:37:52.790522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount431259571.mount: Deactivated successfully. Jan 24 00:37:53.298925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:37:53.305252 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:37:53.358395 kubelet[2484]: E0124 00:37:53.356896 2484 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:37:53.360865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:37:53.361180 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
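
With "Daemon has completed initialization" and "API listen on /run/docker.sock" above, the Docker Engine API is reachable over its unix socket. A minimal health check speaking HTTP over that socket with the standard library; /_ping is the Engine API's documented liveness endpoint and returns the body "OK":

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix domain socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")  # placeholder Host for the request line
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())  # expect: 200 OK
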
Jan 24 00:37:54.576638 containerd[1988]: time="2026-01-24T00:37:54.576582635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:54.578132 containerd[1988]: time="2026-01-24T00:37:54.577853458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 24 00:37:54.580018 containerd[1988]: time="2026-01-24T00:37:54.579954931Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:54.583274 containerd[1988]: time="2026-01-24T00:37:54.583238146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:54.584371 containerd[1988]: time="2026-01-24T00:37:54.584161730Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.363823933s" Jan 24 00:37:54.584371 containerd[1988]: time="2026-01-24T00:37:54.584197278Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:37:54.585181 containerd[1988]: time="2026-01-24T00:37:54.585153469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:37:56.040084 containerd[1988]: time="2026-01-24T00:37:56.040014155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:56.041400 containerd[1988]: time="2026-01-24T00:37:56.041200746Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 24 00:37:56.042728 containerd[1988]: time="2026-01-24T00:37:56.042683159Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:56.048400 containerd[1988]: time="2026-01-24T00:37:56.048338605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:56.049502 containerd[1988]: time="2026-01-24T00:37:56.049362507Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.464173365s" Jan 24 00:37:56.049502 containerd[1988]: time="2026-01-24T00:37:56.049413427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:37:56.050550 containerd[1988]: time="2026-01-24T00:37:56.050305820Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:37:57.287729 containerd[1988]: time="2026-01-24T00:37:57.287368093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:57.289797 containerd[1988]: time="2026-01-24T00:37:57.289704727Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 24 00:37:57.292318 containerd[1988]: time="2026-01-24T00:37:57.292258491Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:57.298091 containerd[1988]: time="2026-01-24T00:37:57.297143701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:57.298091 containerd[1988]: time="2026-01-24T00:37:57.297968319Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.247634414s" Jan 24 00:37:57.298091 containerd[1988]: time="2026-01-24T00:37:57.297999267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:37:57.298754 containerd[1988]: time="2026-01-24T00:37:57.298651521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:37:58.435760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814048219.mount: Deactivated successfully. 
Jan 24 00:37:59.029559 containerd[1988]: time="2026-01-24T00:37:59.029498425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:59.030853 containerd[1988]: time="2026-01-24T00:37:59.030616564Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 24 00:37:59.032256 containerd[1988]: time="2026-01-24T00:37:59.032077331Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:59.034641 containerd[1988]: time="2026-01-24T00:37:59.034609135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:37:59.035128 containerd[1988]: time="2026-01-24T00:37:59.035095834Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.736160708s" Jan 24 00:37:59.035201 containerd[1988]: time="2026-01-24T00:37:59.035134983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:37:59.036106 containerd[1988]: time="2026-01-24T00:37:59.036082309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:37:59.581577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377561650.mount: Deactivated successfully. 
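
Each completed pull above logs the image id, repo digest, unpacked size in bytes, and wall-clock duration (for kube-proxy: size "31160918" in 1.736160708s). A small sketch that extracts those two fields from such a line and derives rough pull throughput; the regex assumes the exact escaped-quote formatting shown in this log:

    import re

    # Fragment of the kube-proxy completed-pull line, escaping as logged:
    line = r'msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" ... size \"31160918\" in 1.736160708s"'

    m = re.search(r'size \\"(\d+)\\" in ([\d.]+)s', line)
    size_bytes, secs = int(m.group(1)), float(m.group(2))
    print(f"{size_bytes / 1e6:.1f} MB in {secs:.2f} s "
          f"~ {size_bytes / secs / 1e6:.1f} MB/s")
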
Jan 24 00:38:00.720026 containerd[1988]: time="2026-01-24T00:38:00.719966888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:00.721954 containerd[1988]: time="2026-01-24T00:38:00.721739569Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 24 00:38:00.724161 containerd[1988]: time="2026-01-24T00:38:00.723743855Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:00.727772 containerd[1988]: time="2026-01-24T00:38:00.727730598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:00.729149 containerd[1988]: time="2026-01-24T00:38:00.729103998Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.692987121s" Jan 24 00:38:00.729286 containerd[1988]: time="2026-01-24T00:38:00.729265994Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:38:00.730057 containerd[1988]: time="2026-01-24T00:38:00.729870328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:38:01.196034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221774444.mount: Deactivated successfully. 
Jan 24 00:38:01.203193 containerd[1988]: time="2026-01-24T00:38:01.203121671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:01.204409 containerd[1988]: time="2026-01-24T00:38:01.204202386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:38:01.209655 containerd[1988]: time="2026-01-24T00:38:01.206343442Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:01.212095 containerd[1988]: time="2026-01-24T00:38:01.212056395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:01.212861 containerd[1988]: time="2026-01-24T00:38:01.212815468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 482.640834ms" Jan 24 00:38:01.213811 containerd[1988]: time="2026-01-24T00:38:01.212865162Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:38:01.214301 containerd[1988]: time="2026-01-24T00:38:01.214269051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:38:01.813302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2864743280.mount: Deactivated successfully. Jan 24 00:38:03.611445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:38:03.616630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:38:05.132162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:38:05.142839 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:38:05.347905 kubelet[2672]: E0124 00:38:05.347819 2672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:38:05.350690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:38:05.350934 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
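The kubelet exit above is the expected failure mode when /var/lib/kubelet/config.yaml has not been written yet; on a kubeadm-managed node that file is generated later by kubeadm init/join. A minimal sketch of the format the kubelet is looking for (field values illustrative, not the file this node eventually received):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches CgroupDriver in the NodeConfig dump below
    staticPodPath: /etc/kubernetes/manifests  # where the control-plane static pods come from
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10                            # assumed default service-network DNS address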
Jan 24 00:38:05.789967 containerd[1988]: time="2026-01-24T00:38:05.789885945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:05.794022 containerd[1988]: time="2026-01-24T00:38:05.793953243Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 24 00:38:05.799159 containerd[1988]: time="2026-01-24T00:38:05.799077109Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:05.809255 containerd[1988]: time="2026-01-24T00:38:05.808689351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:05.810179 containerd[1988]: time="2026-01-24T00:38:05.810135901Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.595821624s" Jan 24 00:38:05.810287 containerd[1988]: time="2026-01-24T00:38:05.810185550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:38:09.066307 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:38:09.121421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:38:09.128788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:38:09.166965 systemd[1]: Reloading requested from client PID 2712 ('systemctl') (unit session-7.scope)... Jan 24 00:38:09.166985 systemd[1]: Reloading... Jan 24 00:38:09.314408 zram_generator::config[2755]: No configuration found. Jan 24 00:38:09.457464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:38:09.543399 systemd[1]: Reloading finished in 375 ms. Jan 24 00:38:09.602284 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:38:09.602422 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:38:09.602752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:38:09.609192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:38:09.806920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:38:09.818838 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:38:09.872984 kubelet[2815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:38:09.875346 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
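The deprecation warnings around this kubelet restart (one more follows below) all point at the same remedy: move the flag values into the KubeletConfiguration file named by --config. The config-file equivalents, assuming the stock containerd socket and the flexvolume directory this log probes later, would be roughly:

    # KubeletConfiguration fields replacing the deprecated flags (values assumed, not read from this node)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    # --pod-infra-container-image has no config-file equivalent; per the warning above,
    # from 1.35 the sandbox image is taken from the container runtime's own configuration.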
Jan 24 00:38:09.875346 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:38:09.875346 kubelet[2815]: I0124 00:38:09.873431 2815 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:38:10.183052 kubelet[2815]: I0124 00:38:10.182928 2815 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:38:10.183052 kubelet[2815]: I0124 00:38:10.182960 2815 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:38:10.184854 kubelet[2815]: I0124 00:38:10.183420 2815 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:38:10.238327 kubelet[2815]: E0124 00:38:10.238266 2815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:10.239708 kubelet[2815]: I0124 00:38:10.239662 2815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:38:10.262455 kubelet[2815]: E0124 00:38:10.262412 2815 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:38:10.262636 kubelet[2815]: I0124 00:38:10.262615 2815 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:38:10.267708 kubelet[2815]: I0124 00:38:10.267671 2815 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:38:10.272714 kubelet[2815]: I0124 00:38:10.272635 2815 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:38:10.273073 kubelet[2815]: I0124 00:38:10.272712 2815 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:38:10.273251 kubelet[2815]: I0124 00:38:10.273080 2815 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:38:10.273251 kubelet[2815]: I0124 00:38:10.273098 2815 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:38:10.275168 kubelet[2815]: I0124 00:38:10.275113 2815 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:38:10.283287 kubelet[2815]: I0124 00:38:10.283212 2815 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:38:10.283287 kubelet[2815]: I0124 00:38:10.283269 2815 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:38:10.283287 kubelet[2815]: I0124 00:38:10.283298 2815 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:38:10.283673 kubelet[2815]: I0124 00:38:10.283309 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:38:10.291789 kubelet[2815]: W0124 00:38:10.291728 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-37&limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:10.291946 kubelet[2815]: E0124 00:38:10.291809 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-37&limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:10.292452 kubelet[2815]: W0124 
00:38:10.292348 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:10.292452 kubelet[2815]: E0124 00:38:10.292434 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:10.294593 kubelet[2815]: I0124 00:38:10.294558 2815 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:38:10.299940 kubelet[2815]: I0124 00:38:10.299777 2815 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:38:10.302458 kubelet[2815]: W0124 00:38:10.302426 2815 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:38:10.303084 kubelet[2815]: I0124 00:38:10.303053 2815 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:38:10.303164 kubelet[2815]: I0124 00:38:10.303092 2815 server.go:1287] "Started kubelet" Jan 24 00:38:10.314251 kubelet[2815]: I0124 00:38:10.314091 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:38:10.317473 kubelet[2815]: E0124 00:38:10.314602 2815 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.37:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-37.188d83c03cc672c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-37,UID:ip-172-31-23-37,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-37,},FirstTimestamp:2026-01-24 00:38:10.303070918 +0000 UTC m=+0.479975744,LastTimestamp:2026-01-24 00:38:10.303070918 +0000 UTC m=+0.479975744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-37,}" Jan 24 00:38:10.317473 kubelet[2815]: I0124 00:38:10.316685 2815 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:38:10.318037 kubelet[2815]: I0124 00:38:10.318016 2815 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:38:10.318896 kubelet[2815]: I0124 00:38:10.318850 2815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:38:10.319078 kubelet[2815]: I0124 00:38:10.319062 2815 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:38:10.320534 kubelet[2815]: I0124 00:38:10.320513 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:38:10.323797 kubelet[2815]: I0124 00:38:10.323450 2815 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:38:10.323797 kubelet[2815]: E0124 00:38:10.323655 2815 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ip-172-31-23-37\" not found" Jan 24 00:38:10.324401 kubelet[2815]: E0124 00:38:10.324127 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-37?timeout=10s\": dial tcp 172.31.23.37:6443: connect: connection refused" interval="200ms" Jan 24 00:38:10.326697 kubelet[2815]: I0124 00:38:10.326680 2815 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:38:10.327234 kubelet[2815]: I0124 00:38:10.326804 2815 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:38:10.327234 kubelet[2815]: W0124 00:38:10.327150 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:10.327234 kubelet[2815]: E0124 00:38:10.327193 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:10.327774 kubelet[2815]: I0124 00:38:10.327759 2815 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:38:10.327922 kubelet[2815]: I0124 00:38:10.327909 2815 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:38:10.329571 kubelet[2815]: E0124 00:38:10.329551 2815 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:38:10.330104 kubelet[2815]: I0124 00:38:10.329648 2815 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:38:10.336738 kubelet[2815]: I0124 00:38:10.335562 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:38:10.336738 kubelet[2815]: I0124 00:38:10.336621 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:38:10.336738 kubelet[2815]: I0124 00:38:10.336639 2815 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:38:10.336738 kubelet[2815]: I0124 00:38:10.336656 2815 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
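The "Failed to ensure lease exists, will retry" errors recur through this log with an escalating interval (200 ms here, then 400 ms, 800 ms, 1.6 s, 3.2 s), i.e. exponential backoff while the API server stays unreachable. The object being retried is the node's heartbeat Lease in kube-node-lease; once the apiserver comes up it converges to roughly the following (timestamps illustrative):

    apiVersion: coordination.k8s.io/v1
    kind: Lease
    metadata:
      name: ip-172-31-23-37
      namespace: kube-node-lease
    spec:
      holderIdentity: ip-172-31-23-37
      leaseDurationSeconds: 40                  # default node-lease duration
      renewTime: "2026-01-24T00:38:19.000000Z"  # renewed by the kubelet roughly every 10 s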
Jan 24 00:38:10.336738 kubelet[2815]: I0124 00:38:10.336665 2815 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:38:10.336738 kubelet[2815]: E0124 00:38:10.336709 2815 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:38:10.347357 kubelet[2815]: W0124 00:38:10.347121 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:10.347357 kubelet[2815]: E0124 00:38:10.347191 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:10.357948 kubelet[2815]: I0124 00:38:10.357923 2815 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:38:10.358059 kubelet[2815]: I0124 00:38:10.357978 2815 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:38:10.358059 kubelet[2815]: I0124 00:38:10.357998 2815 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:38:10.361686 kubelet[2815]: I0124 00:38:10.361647 2815 policy_none.go:49] "None policy: Start" Jan 24 00:38:10.361686 kubelet[2815]: I0124 00:38:10.361677 2815 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:38:10.361686 kubelet[2815]: I0124 00:38:10.361693 2815 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:38:10.369554 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:38:10.379955 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:38:10.383611 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:38:10.393543 kubelet[2815]: I0124 00:38:10.393515 2815 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:38:10.393867 kubelet[2815]: I0124 00:38:10.393856 2815 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:38:10.393969 kubelet[2815]: I0124 00:38:10.393941 2815 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:38:10.394207 kubelet[2815]: I0124 00:38:10.394189 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:38:10.395574 kubelet[2815]: E0124 00:38:10.395488 2815 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:38:10.395574 kubelet[2815]: E0124 00:38:10.395525 2815 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-37\" not found" Jan 24 00:38:10.446991 systemd[1]: Created slice kubepods-burstable-podfee2dec01c817ac5e1a09f2465209ccd.slice - libcontainer container kubepods-burstable-podfee2dec01c817ac5e1a09f2465209ccd.slice. 
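The HardEvictionThresholds embedded in the NodeConfig dump above, expressed in KubeletConfiguration form, read:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"       # Percentage 0.1 in the dump
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"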
Jan 24 00:38:10.461659 kubelet[2815]: E0124 00:38:10.461413 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:10.464262 systemd[1]: Created slice kubepods-burstable-podfb24bc26ddee320394d08dcdb484b185.slice - libcontainer container kubepods-burstable-podfb24bc26ddee320394d08dcdb484b185.slice. Jan 24 00:38:10.473230 kubelet[2815]: E0124 00:38:10.473051 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:10.476522 systemd[1]: Created slice kubepods-burstable-podf1af3734b20f96524d841bc1bbfaaf72.slice - libcontainer container kubepods-burstable-podf1af3734b20f96524d841bc1bbfaaf72.slice. Jan 24 00:38:10.478730 kubelet[2815]: E0124 00:38:10.478702 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:10.496253 kubelet[2815]: I0124 00:38:10.496215 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-37" Jan 24 00:38:10.496832 kubelet[2815]: E0124 00:38:10.496795 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.37:6443/api/v1/nodes\": dial tcp 172.31.23.37:6443: connect: connection refused" node="ip-172-31-23-37" Jan 24 00:38:10.525692 kubelet[2815]: E0124 00:38:10.525633 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-37?timeout=10s\": dial tcp 172.31.23.37:6443: connect: connection refused" interval="400ms" Jan 24 00:38:10.628323 kubelet[2815]: I0124 00:38:10.628266 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fee2dec01c817ac5e1a09f2465209ccd-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-37\" (UID: \"fee2dec01c817ac5e1a09f2465209ccd\") " pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:10.628323 kubelet[2815]: I0124 00:38:10.628305 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:10.628323 kubelet[2815]: I0124 00:38:10.628326 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:10.628323 kubelet[2815]: I0124 00:38:10.628341 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:10.628681 kubelet[2815]: I0124 00:38:10.628357 2815 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1af3734b20f96524d841bc1bbfaaf72-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-37\" (UID: \"f1af3734b20f96524d841bc1bbfaaf72\") " pod="kube-system/kube-scheduler-ip-172-31-23-37" Jan 24 00:38:10.628681 kubelet[2815]: I0124 00:38:10.628399 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fee2dec01c817ac5e1a09f2465209ccd-ca-certs\") pod \"kube-apiserver-ip-172-31-23-37\" (UID: \"fee2dec01c817ac5e1a09f2465209ccd\") " pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:10.628681 kubelet[2815]: I0124 00:38:10.628416 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:10.628681 kubelet[2815]: I0124 00:38:10.628433 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:10.628681 kubelet[2815]: I0124 00:38:10.628450 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fee2dec01c817ac5e1a09f2465209ccd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-37\" (UID: \"fee2dec01c817ac5e1a09f2465209ccd\") " pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:10.698984 kubelet[2815]: I0124 00:38:10.698945 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-37" Jan 24 00:38:10.699398 kubelet[2815]: E0124 00:38:10.699295 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.37:6443/api/v1/nodes\": dial tcp 172.31.23.37:6443: connect: connection refused" node="ip-172-31-23-37" Jan 24 00:38:10.767582 containerd[1988]: time="2026-01-24T00:38:10.767519321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-37,Uid:fee2dec01c817ac5e1a09f2465209ccd,Namespace:kube-system,Attempt:0,}" Jan 24 00:38:10.773958 containerd[1988]: time="2026-01-24T00:38:10.773919244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-37,Uid:fb24bc26ddee320394d08dcdb484b185,Namespace:kube-system,Attempt:0,}" Jan 24 00:38:10.779969 containerd[1988]: time="2026-01-24T00:38:10.779910436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-37,Uid:f1af3734b20f96524d841bc1bbfaaf72,Namespace:kube-system,Attempt:0,}" Jan 24 00:38:10.926335 kubelet[2815]: E0124 00:38:10.926291 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-37?timeout=10s\": dial tcp 172.31.23.37:6443: connect: connection refused" interval="800ms" Jan 24 00:38:11.101449 kubelet[2815]: I0124 00:38:11.101310 2815 kubelet_node_status.go:75] 
"Attempting to register node" node="ip-172-31-23-37" Jan 24 00:38:11.101983 kubelet[2815]: E0124 00:38:11.101945 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.37:6443/api/v1/nodes\": dial tcp 172.31.23.37:6443: connect: connection refused" node="ip-172-31-23-37" Jan 24 00:38:11.244047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283127823.mount: Deactivated successfully. Jan 24 00:38:11.252455 containerd[1988]: time="2026-01-24T00:38:11.252407541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:38:11.255001 containerd[1988]: time="2026-01-24T00:38:11.254946865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:38:11.256064 containerd[1988]: time="2026-01-24T00:38:11.256026892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:38:11.258066 containerd[1988]: time="2026-01-24T00:38:11.258012301Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:38:11.259322 containerd[1988]: time="2026-01-24T00:38:11.259286462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:38:11.260116 containerd[1988]: time="2026-01-24T00:38:11.260076494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:38:11.261927 containerd[1988]: time="2026-01-24T00:38:11.261336800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:38:11.264253 containerd[1988]: time="2026-01-24T00:38:11.263340365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:38:11.264253 containerd[1988]: time="2026-01-24T00:38:11.264076432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 490.089713ms" Jan 24 00:38:11.266489 containerd[1988]: time="2026-01-24T00:38:11.266452202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.471736ms" Jan 24 00:38:11.272846 containerd[1988]: time="2026-01-24T00:38:11.272790145Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.176876ms" Jan 24 00:38:11.409284 kubelet[2815]: W0124 00:38:11.407474 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-37&limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:11.409284 kubelet[2815]: E0124 00:38:11.407559 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-37&limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:11.414849 kubelet[2815]: W0124 00:38:11.414782 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:11.414977 kubelet[2815]: E0124 00:38:11.414856 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:11.502172 containerd[1988]: time="2026-01-24T00:38:11.502065052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:11.502172 containerd[1988]: time="2026-01-24T00:38:11.502143126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:11.504670 containerd[1988]: time="2026-01-24T00:38:11.502462404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:11.504670 containerd[1988]: time="2026-01-24T00:38:11.503231070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:11.505427 containerd[1988]: time="2026-01-24T00:38:11.505299171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:11.505427 containerd[1988]: time="2026-01-24T00:38:11.505350991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:11.505427 containerd[1988]: time="2026-01-24T00:38:11.505365800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:11.506805 containerd[1988]: time="2026-01-24T00:38:11.505479133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:11.507693 containerd[1988]: time="2026-01-24T00:38:11.507581009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:11.507884 containerd[1988]: time="2026-01-24T00:38:11.507823440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:11.508082 containerd[1988]: time="2026-01-24T00:38:11.507870880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:11.508811 containerd[1988]: time="2026-01-24T00:38:11.508260837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:11.520035 kubelet[2815]: W0124 00:38:11.488859 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:11.520226 kubelet[2815]: E0124 00:38:11.520046 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:11.543092 kubelet[2815]: W0124 00:38:11.542980 2815 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:11.543092 kubelet[2815]: E0124 00:38:11.543049 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:11.560583 systemd[1]: Started cri-containerd-3cf53f07c3bde92b3febba9deb56aeb5f1fb88deae9ca759661f62f87b998adb.scope - libcontainer container 3cf53f07c3bde92b3febba9deb56aeb5f1fb88deae9ca759661f62f87b998adb. Jan 24 00:38:11.570854 systemd[1]: Started cri-containerd-363d757182b6920e1435330f22df484e4ebb1d114ad594aacafb782dda771362.scope - libcontainer container 363d757182b6920e1435330f22df484e4ebb1d114ad594aacafb782dda771362. Jan 24 00:38:11.574528 systemd[1]: Started cri-containerd-506fc4f3654f7ee79776b624e1518e5dd78ab37f39257947760b4b1b94327d96.scope - libcontainer container 506fc4f3654f7ee79776b624e1518e5dd78ab37f39257947760b4b1b94327d96. 
Jan 24 00:38:11.663906 containerd[1988]: time="2026-01-24T00:38:11.663737883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-37,Uid:fee2dec01c817ac5e1a09f2465209ccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cf53f07c3bde92b3febba9deb56aeb5f1fb88deae9ca759661f62f87b998adb\"" Jan 24 00:38:11.671314 containerd[1988]: time="2026-01-24T00:38:11.671161891Z" level=info msg="CreateContainer within sandbox \"3cf53f07c3bde92b3febba9deb56aeb5f1fb88deae9ca759661f62f87b998adb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:38:11.698881 containerd[1988]: time="2026-01-24T00:38:11.698833256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-37,Uid:fb24bc26ddee320394d08dcdb484b185,Namespace:kube-system,Attempt:0,} returns sandbox id \"363d757182b6920e1435330f22df484e4ebb1d114ad594aacafb782dda771362\"" Jan 24 00:38:11.703164 containerd[1988]: time="2026-01-24T00:38:11.702979083Z" level=info msg="CreateContainer within sandbox \"363d757182b6920e1435330f22df484e4ebb1d114ad594aacafb782dda771362\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:38:11.707935 containerd[1988]: time="2026-01-24T00:38:11.707903915Z" level=info msg="CreateContainer within sandbox \"3cf53f07c3bde92b3febba9deb56aeb5f1fb88deae9ca759661f62f87b998adb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"38dfedbdd1c3ee8ee74ef42aa73af61b99c10b96871ba422b50e163b64981251\"" Jan 24 00:38:11.709406 containerd[1988]: time="2026-01-24T00:38:11.709127726Z" level=info msg="StartContainer for \"38dfedbdd1c3ee8ee74ef42aa73af61b99c10b96871ba422b50e163b64981251\"" Jan 24 00:38:11.709406 containerd[1988]: time="2026-01-24T00:38:11.709299989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-37,Uid:f1af3734b20f96524d841bc1bbfaaf72,Namespace:kube-system,Attempt:0,} returns sandbox id \"506fc4f3654f7ee79776b624e1518e5dd78ab37f39257947760b4b1b94327d96\"" Jan 24 00:38:11.715577 containerd[1988]: time="2026-01-24T00:38:11.715415584Z" level=info msg="CreateContainer within sandbox \"506fc4f3654f7ee79776b624e1518e5dd78ab37f39257947760b4b1b94327d96\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:38:11.731106 containerd[1988]: time="2026-01-24T00:38:11.730737936Z" level=info msg="CreateContainer within sandbox \"363d757182b6920e1435330f22df484e4ebb1d114ad594aacafb782dda771362\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd\"" Jan 24 00:38:11.732306 kubelet[2815]: E0124 00:38:11.732241 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-37?timeout=10s\": dial tcp 172.31.23.37:6443: connect: connection refused" interval="1.6s" Jan 24 00:38:11.733392 containerd[1988]: time="2026-01-24T00:38:11.732635446Z" level=info msg="StartContainer for \"9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd\"" Jan 24 00:38:11.749614 systemd[1]: Started cri-containerd-38dfedbdd1c3ee8ee74ef42aa73af61b99c10b96871ba422b50e163b64981251.scope - libcontainer container 38dfedbdd1c3ee8ee74ef42aa73af61b99c10b96871ba422b50e163b64981251. 
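The three sandboxes and containers being created here are driven purely by manifests under /etc/kubernetes/manifests (the static pod path added earlier); no API server is involved yet. A skeleton of such a manifest, with the image tag and flags illustrative rather than read from this node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical   # the class the mirror-pod errors below look for
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.32.11   # illustrative tag
        command:
        - kube-apiserver
        - --advertise-address=172.31.23.37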
Jan 24 00:38:11.760027 containerd[1988]: time="2026-01-24T00:38:11.759978152Z" level=info msg="CreateContainer within sandbox \"506fc4f3654f7ee79776b624e1518e5dd78ab37f39257947760b4b1b94327d96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef\"" Jan 24 00:38:11.761612 containerd[1988]: time="2026-01-24T00:38:11.761574751Z" level=info msg="StartContainer for \"db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef\"" Jan 24 00:38:11.782684 systemd[1]: Started cri-containerd-9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd.scope - libcontainer container 9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd. Jan 24 00:38:11.818588 systemd[1]: Started cri-containerd-db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef.scope - libcontainer container db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef. Jan 24 00:38:11.854137 containerd[1988]: time="2026-01-24T00:38:11.853770701Z" level=info msg="StartContainer for \"38dfedbdd1c3ee8ee74ef42aa73af61b99c10b96871ba422b50e163b64981251\" returns successfully" Jan 24 00:38:11.898712 containerd[1988]: time="2026-01-24T00:38:11.898570484Z" level=info msg="StartContainer for \"9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd\" returns successfully" Jan 24 00:38:11.903059 containerd[1988]: time="2026-01-24T00:38:11.902400707Z" level=info msg="StartContainer for \"db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef\" returns successfully" Jan 24 00:38:11.904331 kubelet[2815]: I0124 00:38:11.904218 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-37" Jan 24 00:38:11.904937 kubelet[2815]: E0124 00:38:11.904880 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.37:6443/api/v1/nodes\": dial tcp 172.31.23.37:6443: connect: connection refused" node="ip-172-31-23-37" Jan 24 00:38:12.322729 kubelet[2815]: E0124 00:38:12.322683 2815 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:12.363761 kubelet[2815]: E0124 00:38:12.363725 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:12.368940 kubelet[2815]: E0124 00:38:12.368727 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:12.371260 kubelet[2815]: E0124 00:38:12.371230 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:13.332844 kubelet[2815]: E0124 00:38:13.332795 2815 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-37?timeout=10s\": dial tcp 172.31.23.37:6443: connect: connection refused" interval="3.2s" Jan 24 00:38:13.350917 kubelet[2815]: W0124 00:38:13.350873 2815 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.37:6443: connect: connection refused Jan 24 00:38:13.351069 kubelet[2815]: E0124 00:38:13.350940 2815 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.37:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:38:13.374932 kubelet[2815]: E0124 00:38:13.374894 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:13.375704 kubelet[2815]: E0124 00:38:13.375678 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:13.508901 kubelet[2815]: I0124 00:38:13.508448 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-37" Jan 24 00:38:13.508901 kubelet[2815]: E0124 00:38:13.508789 2815 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.37:6443/api/v1/nodes\": dial tcp 172.31.23.37:6443: connect: connection refused" node="ip-172-31-23-37" Jan 24 00:38:14.377993 kubelet[2815]: E0124 00:38:14.377807 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:15.442845 kubelet[2815]: E0124 00:38:15.442807 2815 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:16.296397 kubelet[2815]: I0124 00:38:16.296323 2815 apiserver.go:52] "Watching apiserver" Jan 24 00:38:16.327592 kubelet[2815]: I0124 00:38:16.327545 2815 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:38:16.517604 kubelet[2815]: E0124 00:38:16.517568 2815 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-23-37" not found Jan 24 00:38:16.536345 kubelet[2815]: E0124 00:38:16.536309 2815 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-37\" not found" node="ip-172-31-23-37" Jan 24 00:38:16.711691 kubelet[2815]: I0124 00:38:16.710597 2815 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-37" Jan 24 00:38:16.722639 kubelet[2815]: I0124 00:38:16.722581 2815 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-37" Jan 24 00:38:16.726328 kubelet[2815]: I0124 00:38:16.724038 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-37" Jan 24 00:38:16.729426 kubelet[2815]: E0124 00:38:16.729257 2815 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-37" Jan 24 00:38:16.729426 kubelet[2815]: I0124 00:38:16.729295 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-37" 
Jan 24 00:38:16.731505 kubelet[2815]: E0124 00:38:16.731459 2815 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:16.731913 kubelet[2815]: I0124 00:38:16.731578 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:16.735174 kubelet[2815]: E0124 00:38:16.735141 2815 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:18.294804 systemd[1]: Reloading requested from client PID 3094 ('systemctl') (unit session-7.scope)... Jan 24 00:38:18.294823 systemd[1]: Reloading... Jan 24 00:38:18.331403 kubelet[2815]: I0124 00:38:18.329106 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:18.399503 zram_generator::config[3133]: No configuration found. Jan 24 00:38:18.546668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:38:18.648522 systemd[1]: Reloading finished in 353 ms. Jan 24 00:38:18.690145 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:38:18.698053 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:38:18.698300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:38:18.705551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:38:19.033271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:38:19.043895 (kubelet)[3194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:38:19.146604 kubelet[3194]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:38:19.146604 kubelet[3194]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:38:19.146604 kubelet[3194]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
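The "no PriorityClass with name system-node-critical was found" rejections are transient: that class is a built-in the API server itself creates while bootstrapping, so mirror pods cannot be admitted until the apiserver static pod started above is fully up. The built-in object is equivalent to:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000
    description: Used for system critical pods that must not be moved from their current node.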
Jan 24 00:38:19.148504 kubelet[3194]: I0124 00:38:19.148191 3194 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:38:19.157717 kubelet[3194]: I0124 00:38:19.157670 3194 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:38:19.157717 kubelet[3194]: I0124 00:38:19.157708 3194 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:38:19.163017 kubelet[3194]: I0124 00:38:19.162298 3194 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:38:19.169277 kubelet[3194]: I0124 00:38:19.169233 3194 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:38:19.182619 kubelet[3194]: I0124 00:38:19.182443 3194 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:38:19.194209 kubelet[3194]: E0124 00:38:19.194069 3194 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:38:19.194209 kubelet[3194]: I0124 00:38:19.194211 3194 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:38:19.197555 kubelet[3194]: I0124 00:38:19.197518 3194 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 24 00:38:19.197781 kubelet[3194]: I0124 00:38:19.197716 3194 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:38:19.197921 kubelet[3194]: I0124 00:38:19.197749 3194 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:38:19.198039 kubelet[3194]: I0124 00:38:19.197924 3194 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 24 00:38:19.198039 kubelet[3194]: I0124 00:38:19.197937 3194 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:38:19.198039 kubelet[3194]: I0124 00:38:19.197979 3194 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:38:19.200049 kubelet[3194]: I0124 00:38:19.199896 3194 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:38:19.200049 kubelet[3194]: I0124 00:38:19.199941 3194 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:38:19.200049 kubelet[3194]: I0124 00:38:19.199987 3194 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:38:19.200049 kubelet[3194]: I0124 00:38:19.200000 3194 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:38:19.220710 kubelet[3194]: I0124 00:38:19.220481 3194 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:38:19.222113 kubelet[3194]: I0124 00:38:19.222094 3194 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:38:19.223744 kubelet[3194]: I0124 00:38:19.222669 3194 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:38:19.223744 kubelet[3194]: I0124 00:38:19.222701 3194 server.go:1287] "Started kubelet" Jan 24 00:38:19.225897 kubelet[3194]: I0124 00:38:19.225844 3194 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:38:19.231304 kubelet[3194]: I0124 00:38:19.231264 3194 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:38:19.233120 kubelet[3194]: I0124 00:38:19.233064 3194 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:38:19.236243 kubelet[3194]: I0124 00:38:19.235682 3194 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:38:19.236243 kubelet[3194]: I0124 00:38:19.235887 3194 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:38:19.243174 kubelet[3194]: I0124 00:38:19.242277 3194 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:38:19.246355 kubelet[3194]: I0124 00:38:19.245180 3194 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:38:19.246355 kubelet[3194]: I0124 00:38:19.245318 3194 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:38:19.249061 kubelet[3194]: I0124 00:38:19.248069 3194 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:38:19.251346 kubelet[3194]: I0124 00:38:19.251213 3194 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:38:19.253358 kubelet[3194]: I0124 00:38:19.253322 3194 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:38:19.254520 kubelet[3194]: I0124 00:38:19.254493 3194 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:38:19.254703 kubelet[3194]: I0124 00:38:19.254537 3194 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:38:19.254703 kubelet[3194]: I0124 00:38:19.254546 3194 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:38:19.254703 kubelet[3194]: E0124 00:38:19.254597 3194 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:38:19.256508 kubelet[3194]: E0124 00:38:19.256399 3194 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:38:19.257635 kubelet[3194]: I0124 00:38:19.257584 3194 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:38:19.257808 kubelet[3194]: I0124 00:38:19.257792 3194 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:38:19.265598 kubelet[3194]: I0124 00:38:19.265570 3194 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:38:19.337132 kubelet[3194]: I0124 00:38:19.337034 3194 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337330 3194 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337366 3194 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337615 3194 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337631 3194 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337657 3194 policy_none.go:49] "None policy: Start" Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337670 3194 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337684 3194 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:38:19.338583 kubelet[3194]: I0124 00:38:19.337839 3194 state_mem.go:75] "Updated machine memory state" Jan 24 00:38:19.347218 kubelet[3194]: I0124 00:38:19.347188 3194 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:38:19.347497 kubelet[3194]: I0124 00:38:19.347478 3194 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:38:19.347596 kubelet[3194]: I0124 00:38:19.347502 3194 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:38:19.349452 kubelet[3194]: I0124 00:38:19.349428 3194 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:38:19.350290 kubelet[3194]: E0124 00:38:19.350264 3194 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:38:19.361880 kubelet[3194]: I0124 00:38:19.361284 3194 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:19.362989 kubelet[3194]: I0124 00:38:19.362442 3194 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-37" Jan 24 00:38:19.362989 kubelet[3194]: I0124 00:38:19.362865 3194 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:19.379882 kubelet[3194]: E0124 00:38:19.379841 3194 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-37\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:19.462567 kubelet[3194]: I0124 00:38:19.461675 3194 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-37" Jan 24 00:38:19.474718 kubelet[3194]: I0124 00:38:19.474669 3194 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-37" Jan 24 00:38:19.474929 kubelet[3194]: I0124 00:38:19.474764 3194 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-37" Jan 24 00:38:19.546603 kubelet[3194]: I0124 00:38:19.545872 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fee2dec01c817ac5e1a09f2465209ccd-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-37\" (UID: \"fee2dec01c817ac5e1a09f2465209ccd\") " pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:19.546603 kubelet[3194]: I0124 00:38:19.545915 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:19.546603 kubelet[3194]: I0124 00:38:19.545939 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:19.546603 kubelet[3194]: I0124 00:38:19.545956 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:19.546603 kubelet[3194]: I0124 00:38:19.545971 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1af3734b20f96524d841bc1bbfaaf72-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-37\" (UID: \"f1af3734b20f96524d841bc1bbfaaf72\") " pod="kube-system/kube-scheduler-ip-172-31-23-37" Jan 24 00:38:19.546922 kubelet[3194]: I0124 00:38:19.545986 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fee2dec01c817ac5e1a09f2465209ccd-ca-certs\") pod \"kube-apiserver-ip-172-31-23-37\" (UID: \"fee2dec01c817ac5e1a09f2465209ccd\") " pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:19.546922 kubelet[3194]: I0124 00:38:19.546007 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fee2dec01c817ac5e1a09f2465209ccd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-37\" (UID: \"fee2dec01c817ac5e1a09f2465209ccd\") " pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:19.546922 kubelet[3194]: I0124 00:38:19.546040 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:19.546922 kubelet[3194]: I0124 00:38:19.546067 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb24bc26ddee320394d08dcdb484b185-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-37\" (UID: \"fb24bc26ddee320394d08dcdb484b185\") " pod="kube-system/kube-controller-manager-ip-172-31-23-37" Jan 24 00:38:20.213634 kubelet[3194]: I0124 00:38:20.213516 3194 apiserver.go:52] "Watching apiserver" Jan 24 00:38:20.246564 kubelet[3194]: I0124 00:38:20.246307 3194 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:38:20.302814 kubelet[3194]: I0124 00:38:20.302783 3194 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:20.312179 kubelet[3194]: E0124 00:38:20.311907 3194 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-37\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-37" Jan 24 00:38:20.365401 kubelet[3194]: I0124 00:38:20.365311 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-37" podStartSLOduration=2.365289157 podStartE2EDuration="2.365289157s" podCreationTimestamp="2026-01-24 00:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:38:20.350620894 +0000 UTC m=+1.271667683" watchObservedRunningTime="2026-01-24 00:38:20.365289157 +0000 UTC m=+1.286335944" Jan 24 00:38:20.378547 kubelet[3194]: I0124 00:38:20.377995 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-37" podStartSLOduration=1.377973879 podStartE2EDuration="1.377973879s" podCreationTimestamp="2026-01-24 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:38:20.365830205 +0000 UTC m=+1.286876994" watchObservedRunningTime="2026-01-24 00:38:20.377973879 +0000 UTC m=+1.299020668" Jan 24 00:38:20.392468 kubelet[3194]: I0124 00:38:20.392365 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-37" podStartSLOduration=1.392342473 podStartE2EDuration="1.392342473s" podCreationTimestamp="2026-01-24 00:38:19 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:38:20.378737168 +0000 UTC m=+1.299783953" watchObservedRunningTime="2026-01-24 00:38:20.392342473 +0000 UTC m=+1.313389261" Jan 24 00:38:23.452077 update_engine[1969]: I20260124 00:38:23.451426 1969 update_attempter.cc:509] Updating boot flags... Jan 24 00:38:23.520302 kubelet[3194]: I0124 00:38:23.520267 3194 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:38:23.521131 kubelet[3194]: I0124 00:38:23.520975 3194 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:38:23.521185 containerd[1988]: time="2026-01-24T00:38:23.520647173Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:38:23.552490 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3254) Jan 24 00:38:23.735750 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3257) Jan 24 00:38:24.382909 systemd[1]: Created slice kubepods-besteffort-podbdb28f91_a4a9_4135_9595_b4c49964d880.slice - libcontainer container kubepods-besteffort-podbdb28f91_a4a9_4135_9595_b4c49964d880.slice. Jan 24 00:38:24.474936 kubelet[3194]: I0124 00:38:24.474809 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bdb28f91-a4a9-4135-9595-b4c49964d880-kube-proxy\") pod \"kube-proxy-vt57s\" (UID: \"bdb28f91-a4a9-4135-9595-b4c49964d880\") " pod="kube-system/kube-proxy-vt57s" Jan 24 00:38:24.474936 kubelet[3194]: I0124 00:38:24.474850 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdb28f91-a4a9-4135-9595-b4c49964d880-xtables-lock\") pod \"kube-proxy-vt57s\" (UID: \"bdb28f91-a4a9-4135-9595-b4c49964d880\") " pod="kube-system/kube-proxy-vt57s" Jan 24 00:38:24.474936 kubelet[3194]: I0124 00:38:24.474870 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdb28f91-a4a9-4135-9595-b4c49964d880-lib-modules\") pod \"kube-proxy-vt57s\" (UID: \"bdb28f91-a4a9-4135-9595-b4c49964d880\") " pod="kube-system/kube-proxy-vt57s" Jan 24 00:38:24.474936 kubelet[3194]: I0124 00:38:24.474886 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j674c\" (UniqueName: \"kubernetes.io/projected/bdb28f91-a4a9-4135-9595-b4c49964d880-kube-api-access-j674c\") pod \"kube-proxy-vt57s\" (UID: \"bdb28f91-a4a9-4135-9595-b4c49964d880\") " pod="kube-system/kube-proxy-vt57s" Jan 24 00:38:24.604524 systemd[1]: Created slice kubepods-besteffort-pod5d812358_6f1a_4e0b_b848_91179148da03.slice - libcontainer container kubepods-besteffort-pod5d812358_6f1a_4e0b_b848_91179148da03.slice. Jan 24 00:38:24.696195 containerd[1988]: time="2026-01-24T00:38:24.695837585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vt57s,Uid:bdb28f91-a4a9-4135-9595-b4c49964d880,Namespace:kube-system,Attempt:0,}" Jan 24 00:38:24.724527 containerd[1988]: time="2026-01-24T00:38:24.724415060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:24.724527 containerd[1988]: time="2026-01-24T00:38:24.724474723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:24.724527 containerd[1988]: time="2026-01-24T00:38:24.724490489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:24.724780 containerd[1988]: time="2026-01-24T00:38:24.724570996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:24.744045 systemd[1]: run-containerd-runc-k8s.io-aa2b1043560be6f952f517001b74b934b31bb99c8c2840a2a16c5a3812936a1e-runc.Fp39mU.mount: Deactivated successfully. Jan 24 00:38:24.753672 systemd[1]: Started cri-containerd-aa2b1043560be6f952f517001b74b934b31bb99c8c2840a2a16c5a3812936a1e.scope - libcontainer container aa2b1043560be6f952f517001b74b934b31bb99c8c2840a2a16c5a3812936a1e. Jan 24 00:38:24.776578 kubelet[3194]: I0124 00:38:24.776538 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkvh\" (UniqueName: \"kubernetes.io/projected/5d812358-6f1a-4e0b-b848-91179148da03-kube-api-access-ghkvh\") pod \"tigera-operator-7dcd859c48-v4p6h\" (UID: \"5d812358-6f1a-4e0b-b848-91179148da03\") " pod="tigera-operator/tigera-operator-7dcd859c48-v4p6h" Jan 24 00:38:24.776578 kubelet[3194]: I0124 00:38:24.776583 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5d812358-6f1a-4e0b-b848-91179148da03-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v4p6h\" (UID: \"5d812358-6f1a-4e0b-b848-91179148da03\") " pod="tigera-operator/tigera-operator-7dcd859c48-v4p6h" Jan 24 00:38:24.793440 containerd[1988]: time="2026-01-24T00:38:24.793345499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vt57s,Uid:bdb28f91-a4a9-4135-9595-b4c49964d880,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa2b1043560be6f952f517001b74b934b31bb99c8c2840a2a16c5a3812936a1e\"" Jan 24 00:38:24.800075 containerd[1988]: time="2026-01-24T00:38:24.800027107Z" level=info msg="CreateContainer within sandbox \"aa2b1043560be6f952f517001b74b934b31bb99c8c2840a2a16c5a3812936a1e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:38:24.828437 containerd[1988]: time="2026-01-24T00:38:24.828387995Z" level=info msg="CreateContainer within sandbox \"aa2b1043560be6f952f517001b74b934b31bb99c8c2840a2a16c5a3812936a1e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a663c56338d5dc457600805317478a30ce464c8a8313edaab61bac1a5a66db35\"" Jan 24 00:38:24.829218 containerd[1988]: time="2026-01-24T00:38:24.829190117Z" level=info msg="StartContainer for \"a663c56338d5dc457600805317478a30ce464c8a8313edaab61bac1a5a66db35\"" Jan 24 00:38:24.860601 systemd[1]: Started cri-containerd-a663c56338d5dc457600805317478a30ce464c8a8313edaab61bac1a5a66db35.scope - libcontainer container a663c56338d5dc457600805317478a30ce464c8a8313edaab61bac1a5a66db35. 
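The records above trace the CRI call sequence for kube-proxy-vt57s: RunPodSandbox returns a sandbox id at 00:38:24.793, CreateContainer returns a container id at 00:38:24.828, and systemd starts the cri-containerd scope at 00:38:24.860. A small sketch turning those logged timestamps into per-stage latencies (timestamps copied from the records; strptime keeps at most microsecond precision, so the nanosecond tails are truncated):

    # Sketch: gaps between the CRI steps logged above for kube-proxy-vt57s.
    from datetime import datetime

    def ts(s):
        # %f accepts up to six fractional digits, hence the truncation.
        return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")

    run_sandbox_returns      = ts("2026-01-24T00:38:24.793345Z")
    create_container_returns = ts("2026-01-24T00:38:24.828387Z")
    scope_started            = ts("2026-01-24T00:38:24.860601Z")

    print("sandbox -> container created:",
          (create_container_returns - run_sandbox_returns).total_seconds(), "s")  # ~0.035 s
    print("container created -> scope started:",
          (scope_started - create_container_returns).total_seconds(), "s")        # ~0.032 s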
Jan 24 00:38:24.898029 containerd[1988]: time="2026-01-24T00:38:24.897946674Z" level=info msg="StartContainer for \"a663c56338d5dc457600805317478a30ce464c8a8313edaab61bac1a5a66db35\" returns successfully" Jan 24 00:38:24.909175 containerd[1988]: time="2026-01-24T00:38:24.909136584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v4p6h,Uid:5d812358-6f1a-4e0b-b848-91179148da03,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:38:24.935884 containerd[1988]: time="2026-01-24T00:38:24.935596017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:24.935884 containerd[1988]: time="2026-01-24T00:38:24.935653184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:24.935884 containerd[1988]: time="2026-01-24T00:38:24.935668794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:24.935884 containerd[1988]: time="2026-01-24T00:38:24.935820687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:24.956624 systemd[1]: Started cri-containerd-e4cd27ff3925cd0003f492690cac16b8dcc0ecbb67ef02647d4a01d5eb45b56e.scope - libcontainer container e4cd27ff3925cd0003f492690cac16b8dcc0ecbb67ef02647d4a01d5eb45b56e. Jan 24 00:38:25.002955 containerd[1988]: time="2026-01-24T00:38:25.002797937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v4p6h,Uid:5d812358-6f1a-4e0b-b848-91179148da03,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e4cd27ff3925cd0003f492690cac16b8dcc0ecbb67ef02647d4a01d5eb45b56e\"" Jan 24 00:38:25.005964 containerd[1988]: time="2026-01-24T00:38:25.005934903Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:38:25.324672 kubelet[3194]: I0124 00:38:25.324366 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vt57s" podStartSLOduration=1.324350821 podStartE2EDuration="1.324350821s" podCreationTimestamp="2026-01-24 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:38:25.324313898 +0000 UTC m=+6.245360686" watchObservedRunningTime="2026-01-24 00:38:25.324350821 +0000 UTC m=+6.245397608" Jan 24 00:38:26.135185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1897103985.mount: Deactivated successfully. 
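In the pod_startup_latency_tracker record above, podStartSLOduration is simply the observed running time minus the pod's creation timestamp; the firstStartedPulling/lastFinishedPulling fields are zero-valued because the kube-proxy image was already on disk. Reproducing the arithmetic (timestamps taken from the record, truncated to microseconds):

    # Sketch: recompute podStartSLOduration for kube-proxy-vt57s from the
    # timestamps in the tracker record above.
    from datetime import datetime, timezone

    creation = datetime(2026, 1, 24, 0, 38, 24, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 24, 0, 38, 25, 324350, tzinfo=timezone.utc)  # 00:38:25.324350821

    print((observed - creation).total_seconds())  # 1.32435 s, matching podStartSLOduration=1.324350821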
Jan 24 00:38:27.079650 containerd[1988]: time="2026-01-24T00:38:27.079587561Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:27.081006 containerd[1988]: time="2026-01-24T00:38:27.080722367Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:38:27.082416 containerd[1988]: time="2026-01-24T00:38:27.082355573Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:27.085750 containerd[1988]: time="2026-01-24T00:38:27.085014024Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:27.085750 containerd[1988]: time="2026-01-24T00:38:27.085631327Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.079346937s" Jan 24 00:38:27.085750 containerd[1988]: time="2026-01-24T00:38:27.085660732Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:38:27.088444 containerd[1988]: time="2026-01-24T00:38:27.088405828Z" level=info msg="CreateContainer within sandbox \"e4cd27ff3925cd0003f492690cac16b8dcc0ecbb67ef02647d4a01d5eb45b56e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:38:27.105068 containerd[1988]: time="2026-01-24T00:38:27.105024850Z" level=info msg="CreateContainer within sandbox \"e4cd27ff3925cd0003f492690cac16b8dcc0ecbb67ef02647d4a01d5eb45b56e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393\"" Jan 24 00:38:27.105721 containerd[1988]: time="2026-01-24T00:38:27.105678929Z" level=info msg="StartContainer for \"6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393\"" Jan 24 00:38:27.136123 systemd[1]: run-containerd-runc-k8s.io-6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393-runc.CHS0iR.mount: Deactivated successfully. Jan 24 00:38:27.145709 systemd[1]: Started cri-containerd-6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393.scope - libcontainer container 6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393. 
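The pull records above give enough to estimate transfer throughput for the operator image: 25061691 bytes read in 2.079346937s. A quick check of the arithmetic:

    # Rough pull throughput from the figures logged above.
    bytes_read = 25_061_691      # "active requests=0, bytes read=25061691"
    duration_s = 2.079346937     # "... in 2.079346937s"
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")  # ~12.1 MB/s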
Jan 24 00:38:27.178998 containerd[1988]: time="2026-01-24T00:38:27.178946795Z" level=info msg="StartContainer for \"6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393\" returns successfully" Jan 24 00:38:27.586971 kubelet[3194]: I0124 00:38:27.586552 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v4p6h" podStartSLOduration=1.504211019 podStartE2EDuration="3.586524981s" podCreationTimestamp="2026-01-24 00:38:24 +0000 UTC" firstStartedPulling="2026-01-24 00:38:25.004548977 +0000 UTC m=+5.925595966" lastFinishedPulling="2026-01-24 00:38:27.086863163 +0000 UTC m=+8.007909928" observedRunningTime="2026-01-24 00:38:27.329972281 +0000 UTC m=+8.251019069" watchObservedRunningTime="2026-01-24 00:38:27.586524981 +0000 UTC m=+8.507571788" Jan 24 00:38:33.865804 sudo[2312]: pam_unix(sudo:session): session closed for user root Jan 24 00:38:33.949615 sshd[2309]: pam_unix(sshd:session): session closed for user core Jan 24 00:38:33.954751 systemd[1]: sshd@6-172.31.23.37:22-4.153.228.146:54962.service: Deactivated successfully. Jan 24 00:38:33.961221 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:38:33.962039 systemd[1]: session-7.scope: Consumed 5.446s CPU time, 142.7M memory peak, 0B memory swap peak. Jan 24 00:38:33.963899 systemd-logind[1968]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:38:33.967208 systemd-logind[1968]: Removed session 7. Jan 24 00:38:40.942101 systemd[1]: Created slice kubepods-besteffort-pod2efdcb91_e2ec_460b_82b6_f7336e72a6de.slice - libcontainer container kubepods-besteffort-pod2efdcb91_e2ec_460b_82b6_f7336e72a6de.slice. Jan 24 00:38:40.992306 kubelet[3194]: I0124 00:38:40.992186 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2efdcb91-e2ec-460b-82b6-f7336e72a6de-typha-certs\") pod \"calico-typha-667849bf7d-8pphh\" (UID: \"2efdcb91-e2ec-460b-82b6-f7336e72a6de\") " pod="calico-system/calico-typha-667849bf7d-8pphh" Jan 24 00:38:40.992306 kubelet[3194]: I0124 00:38:40.992234 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2efdcb91-e2ec-460b-82b6-f7336e72a6de-tigera-ca-bundle\") pod \"calico-typha-667849bf7d-8pphh\" (UID: \"2efdcb91-e2ec-460b-82b6-f7336e72a6de\") " pod="calico-system/calico-typha-667849bf7d-8pphh" Jan 24 00:38:40.992306 kubelet[3194]: I0124 00:38:40.992261 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgl8q\" (UniqueName: \"kubernetes.io/projected/2efdcb91-e2ec-460b-82b6-f7336e72a6de-kube-api-access-qgl8q\") pod \"calico-typha-667849bf7d-8pphh\" (UID: \"2efdcb91-e2ec-460b-82b6-f7336e72a6de\") " pod="calico-system/calico-typha-667849bf7d-8pphh" Jan 24 00:38:41.079851 kubelet[3194]: I0124 00:38:41.079799 3194 status_manager.go:890] "Failed to get status for pod" podUID="c59909eb-a37c-4811-9727-9afe92768ce1" pod="calico-system/calico-node-zgkks" err="pods \"calico-node-zgkks\" is forbidden: User \"system:node:ip-172-31-23-37\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-23-37' and this object" Jan 24 00:38:41.082106 systemd[1]: Created slice kubepods-besteffort-podc59909eb_a37c_4811_9727_9afe92768ce1.slice - libcontainer container 
kubepods-besteffort-podc59909eb_a37c_4811_9727_9afe92768ce1.slice. Jan 24 00:38:41.093196 kubelet[3194]: I0124 00:38:41.093149 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-flexvol-driver-host\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.093196 kubelet[3194]: I0124 00:38:41.093194 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-var-lib-calico\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.093427 kubelet[3194]: I0124 00:38:41.093233 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9nh\" (UniqueName: \"kubernetes.io/projected/c59909eb-a37c-4811-9727-9afe92768ce1-kube-api-access-6x9nh\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.093427 kubelet[3194]: I0124 00:38:41.093263 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-xtables-lock\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.094218 kubelet[3194]: I0124 00:38:41.094140 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-policysync\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.094218 kubelet[3194]: I0124 00:38:41.094188 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-var-run-calico\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.094363 kubelet[3194]: I0124 00:38:41.094249 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c59909eb-a37c-4811-9727-9afe92768ce1-node-certs\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.094363 kubelet[3194]: I0124 00:38:41.094266 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-cni-bin-dir\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.094363 kubelet[3194]: I0124 00:38:41.094286 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-lib-modules\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 
00:38:41.094363 kubelet[3194]: I0124 00:38:41.094301 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-cni-log-dir\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.094363 kubelet[3194]: I0124 00:38:41.094332 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c59909eb-a37c-4811-9727-9afe92768ce1-cni-net-dir\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.096013 kubelet[3194]: I0124 00:38:41.094350 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c59909eb-a37c-4811-9727-9afe92768ce1-tigera-ca-bundle\") pod \"calico-node-zgkks\" (UID: \"c59909eb-a37c-4811-9727-9afe92768ce1\") " pod="calico-system/calico-node-zgkks" Jan 24 00:38:41.192168 kubelet[3194]: E0124 00:38:41.191580 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:38:41.238687 kubelet[3194]: E0124 00:38:41.236781 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.238687 kubelet[3194]: W0124 00:38:41.236817 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.241400 kubelet[3194]: E0124 00:38:41.241038 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.251485 containerd[1988]: time="2026-01-24T00:38:41.251364205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-667849bf7d-8pphh,Uid:2efdcb91-e2ec-460b-82b6-f7336e72a6de,Namespace:calico-system,Attempt:0,}" Jan 24 00:38:41.292606 kubelet[3194]: E0124 00:38:41.292071 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.292606 kubelet[3194]: W0124 00:38:41.292102 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.292606 kubelet[3194]: E0124 00:38:41.292130 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:38:41.294723 kubelet[3194]: E0124 00:38:41.294028 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.294723 kubelet[3194]: W0124 00:38:41.294053 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.294723 kubelet[3194]: E0124 00:38:41.294078 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:695 error triplet repeats for each FlexVolume probe between 00:38:41.295 and 00:38:41.310 ...]
Jan 24 00:38:41.310741 kubelet[3194]: E0124 00:38:41.310511 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.310741 kubelet[3194]: W0124 00:38:41.310531 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.310741 kubelet[3194]: E0124 00:38:41.310548 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 24 00:38:41.310741 kubelet[3194]: I0124 00:38:41.310647 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/08028277-ca96-466b-b85d-b33e87d62943-varrun\") pod \"csi-node-driver-g8z2m\" (UID: \"08028277-ca96-466b-b85d-b33e87d62943\") " pod="calico-system/csi-node-driver-g8z2m" Jan 24 00:38:41.312391 kubelet[3194]: E0124 00:38:41.311124 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.312391 kubelet[3194]: W0124 00:38:41.311138 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.312391 kubelet[3194]: E0124 00:38:41.311210 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.312391 kubelet[3194]: E0124 00:38:41.311637 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.312391 kubelet[3194]: W0124 00:38:41.311650 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.312391 kubelet[3194]: E0124 00:38:41.311676 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.312391 kubelet[3194]: E0124 00:38:41.312035 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.312391 kubelet[3194]: W0124 00:38:41.312047 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.312391 kubelet[3194]: E0124 00:38:41.312061 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.312944 kubelet[3194]: I0124 00:38:41.312138 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08028277-ca96-466b-b85d-b33e87d62943-socket-dir\") pod \"csi-node-driver-g8z2m\" (UID: \"08028277-ca96-466b-b85d-b33e87d62943\") " pod="calico-system/csi-node-driver-g8z2m" Jan 24 00:38:41.312944 kubelet[3194]: E0124 00:38:41.312514 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.312944 kubelet[3194]: W0124 00:38:41.312556 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.312944 kubelet[3194]: E0124 00:38:41.312581 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:38:41.312944 kubelet[3194]: I0124 00:38:41.312710 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lqm\" (UniqueName: \"kubernetes.io/projected/08028277-ca96-466b-b85d-b33e87d62943-kube-api-access-68lqm\") pod \"csi-node-driver-g8z2m\" (UID: \"08028277-ca96-466b-b85d-b33e87d62943\") " pod="calico-system/csi-node-driver-g8z2m" Jan 24 00:38:41.313167 kubelet[3194]: E0124 00:38:41.313151 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.313215 kubelet[3194]: W0124 00:38:41.313169 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.313705 kubelet[3194]: E0124 00:38:41.313678 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.314406 kubelet[3194]: E0124 00:38:41.313979 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.314406 kubelet[3194]: W0124 00:38:41.313992 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.314406 kubelet[3194]: E0124 00:38:41.314075 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.314406 kubelet[3194]: E0124 00:38:41.314285 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.314406 kubelet[3194]: W0124 00:38:41.314295 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.314688 kubelet[3194]: E0124 00:38:41.314526 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.314688 kubelet[3194]: I0124 00:38:41.314557 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08028277-ca96-466b-b85d-b33e87d62943-kubelet-dir\") pod \"csi-node-driver-g8z2m\" (UID: \"08028277-ca96-466b-b85d-b33e87d62943\") " pod="calico-system/csi-node-driver-g8z2m" Jan 24 00:38:41.316059 kubelet[3194]: E0124 00:38:41.315317 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.316059 kubelet[3194]: W0124 00:38:41.315344 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.316059 kubelet[3194]: E0124 00:38:41.315372 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:38:41.316059 kubelet[3194]: E0124 00:38:41.315768 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.316059 kubelet[3194]: W0124 00:38:41.315780 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.316059 kubelet[3194]: E0124 00:38:41.315808 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.316059 kubelet[3194]: I0124 00:38:41.315833 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08028277-ca96-466b-b85d-b33e87d62943-registration-dir\") pod \"csi-node-driver-g8z2m\" (UID: \"08028277-ca96-466b-b85d-b33e87d62943\") " pod="calico-system/csi-node-driver-g8z2m" Jan 24 00:38:41.317000 kubelet[3194]: E0124 00:38:41.316181 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.317000 kubelet[3194]: W0124 00:38:41.316195 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.317000 kubelet[3194]: E0124 00:38:41.316408 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.317000 kubelet[3194]: E0124 00:38:41.316797 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.317000 kubelet[3194]: W0124 00:38:41.316809 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.317000 kubelet[3194]: E0124 00:38:41.316833 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.318567 kubelet[3194]: E0124 00:38:41.318540 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.318567 kubelet[3194]: W0124 00:38:41.318560 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.318864 kubelet[3194]: E0124 00:38:41.318581 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:38:41.320054 kubelet[3194]: E0124 00:38:41.319919 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.320054 kubelet[3194]: W0124 00:38:41.319937 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.320054 kubelet[3194]: E0124 00:38:41.319952 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.320242 kubelet[3194]: E0124 00:38:41.320224 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.320242 kubelet[3194]: W0124 00:38:41.320236 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.320336 kubelet[3194]: E0124 00:38:41.320254 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:38:41.337353 containerd[1988]: time="2026-01-24T00:38:41.336994376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:38:41.337353 containerd[1988]: time="2026-01-24T00:38:41.337063001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:38:41.337353 containerd[1988]: time="2026-01-24T00:38:41.337080205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:41.337353 containerd[1988]: time="2026-01-24T00:38:41.337185185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:38:41.386105 containerd[1988]: time="2026-01-24T00:38:41.386058055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zgkks,Uid:c59909eb-a37c-4811-9727-9afe92768ce1,Namespace:calico-system,Attempt:0,}" Jan 24 00:38:41.400797 systemd[1]: Started cri-containerd-df454f5cadda228dbf795a6ce0ef4459b2e47a08cea99e4ee8d7bf27fc0ea6b5.scope - libcontainer container df454f5cadda228dbf795a6ce0ef4459b2e47a08cea99e4ee8d7bf27fc0ea6b5. Jan 24 00:38:41.419518 kubelet[3194]: E0124 00:38:41.419472 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:38:41.419721 kubelet[3194]: W0124 00:38:41.419698 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:38:41.419865 kubelet[3194]: E0124 00:38:41.419830 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
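[Editor's note] The kubelet triplet above (driver-call.go:262, driver-call.go:149, plugins.go:695) repeats on every plugin-probe pass throughout this boot: the kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, finds the nodeagent~uds directory but not its uds executable, so the `init` call produces no stdout and the JSON decode fails with "unexpected end of JSON input". The probe is satisfied by any driver binary that answers `init` with a JSON status object on stdout. A minimal Go sketch of that documented calling convention (the capabilities shown are illustrative, not what a real nodeagent~uds driver advertises):

    // flexvol_stub.go - sketch of the FlexVolume driver contract the kubelet
    // probes for above. The kubelet execs `<driver> init` and unmarshals
    // stdout as JSON; empty stdout yields "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Answer the probe; this illustrative driver does not implement attach.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            // All other calls fall back to the kubelet's default handling.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

Installed as .../volume/exec/nodeagent~uds/uds, a binary answering like this would stop the probe error; until then the kubelet logs the same three lines on every pass.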
Jan 24 00:38:41.454303 containerd[1988]: time="2026-01-24T00:38:41.449863532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:38:41.454303 containerd[1988]: time="2026-01-24T00:38:41.451186964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:38:41.454303 containerd[1988]: time="2026-01-24T00:38:41.451208633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:38:41.454303 containerd[1988]: time="2026-01-24T00:38:41.451313438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:38:41.487193 systemd[1]: Started cri-containerd-6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13.scope - libcontainer container 6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13.
Jan 24 00:38:41.538530 containerd[1988]: time="2026-01-24T00:38:41.536690507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-667849bf7d-8pphh,Uid:2efdcb91-e2ec-460b-82b6-f7336e72a6de,Namespace:calico-system,Attempt:0,} returns sandbox id \"df454f5cadda228dbf795a6ce0ef4459b2e47a08cea99e4ee8d7bf27fc0ea6b5\""
Jan 24 00:38:41.542028 containerd[1988]: time="2026-01-24T00:38:41.540340064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zgkks,Uid:c59909eb-a37c-4811-9727-9afe92768ce1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13\""
Jan 24 00:38:41.543406 containerd[1988]: time="2026-01-24T00:38:41.543353797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:38:42.833898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891110776.mount: Deactivated successfully.
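[Editor's note] The sandbox ids returned above are the handles for everything that follows: the CreateContainer and StartContainer entries below reference df454f5c... and 6a880977... rather than pod names. A stripped-down Go sketch of the same CRI round trip over containerd's gRPC socket, with the metadata copied from the calico-node line; the socket path is containerd's conventional default, assumed here, and a real kubelet request also carries the log directory, DNS config, and Linux security context (containerd may reject a config this bare):

    // runpodsandbox_sketch.go - the CRI call behind the
    // "RunPodSandbox ... returns sandbox id" log lines.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Metadata fields mirror the &PodSandboxMetadata{...} echoed in the log.
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "calico-node-zgkks",
                    Uid:       "c59909eb-a37c-4811-9727-9afe92768ce1",
                    Namespace: "calico-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            panic(err)
        }
        // e.g. 6a880977a7fd... as in the log line above.
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }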
Jan 24 00:38:43.256624 kubelet[3194]: E0124 00:38:43.255712 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943"
Jan 24 00:38:43.687534 containerd[1988]: time="2026-01-24T00:38:43.687362845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:43.688796 containerd[1988]: time="2026-01-24T00:38:43.688748497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:38:43.690256 containerd[1988]: time="2026-01-24T00:38:43.690198545Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:43.692728 containerd[1988]: time="2026-01-24T00:38:43.692672131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:43.693955 containerd[1988]: time="2026-01-24T00:38:43.693489824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.150072276s"
Jan 24 00:38:43.693955 containerd[1988]: time="2026-01-24T00:38:43.693524803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:38:43.694999 containerd[1988]: time="2026-01-24T00:38:43.694974591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:38:43.713419 containerd[1988]: time="2026-01-24T00:38:43.713360703Z" level=info msg="CreateContainer within sandbox \"df454f5cadda228dbf795a6ce0ef4459b2e47a08cea99e4ee8d7bf27fc0ea6b5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:38:43.740785 containerd[1988]: time="2026-01-24T00:38:43.740715413Z" level=info msg="CreateContainer within sandbox \"df454f5cadda228dbf795a6ce0ef4459b2e47a08cea99e4ee8d7bf27fc0ea6b5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c244fd23b249b33317ffac8c9f52a61f85074fd77d67a47d4addf667b4af8472\""
Jan 24 00:38:43.741584 containerd[1988]: time="2026-01-24T00:38:43.741546919Z" level=info msg="StartContainer for \"c244fd23b249b33317ffac8c9f52a61f85074fd77d67a47d4addf667b4af8472\""
Jan 24 00:38:43.799644 systemd[1]: Started cri-containerd-c244fd23b249b33317ffac8c9f52a61f85074fd77d67a47d4addf667b4af8472.scope - libcontainer container c244fd23b249b33317ffac8c9f52a61f85074fd77d67a47d4addf667b4af8472.
Jan 24 00:38:43.849271 containerd[1988]: time="2026-01-24T00:38:43.849223383Z" level=info msg="StartContainer for \"c244fd23b249b33317ffac8c9f52a61f85074fd77d67a47d4addf667b4af8472\" returns successfully"
Jan 24 00:38:44.425369 kubelet[3194]: I0124 00:38:44.425299 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-667849bf7d-8pphh" podStartSLOduration=2.27032862 podStartE2EDuration="4.424188708s" podCreationTimestamp="2026-01-24 00:38:40 +0000 UTC" firstStartedPulling="2026-01-24 00:38:41.540945495 +0000 UTC m=+22.461992274" lastFinishedPulling="2026-01-24 00:38:43.69480558 +0000 UTC m=+24.615852362" observedRunningTime="2026-01-24 00:38:44.423342056 +0000 UTC m=+25.344388845" watchObservedRunningTime="2026-01-24 00:38:44.424188708 +0000 UTC m=+25.345235543"
Jan 24 00:38:44.431983 kubelet[3194]: E0124 00:38:44.431942 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:38:44.431983 kubelet[3194]: W0124 00:38:44.431973 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:38:44.432251 kubelet[3194]: E0124 00:38:44.432004 3194 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
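[Editor's note] The pod_startup_latency_tracker entry encodes a small calculation worth unpacking: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time does not count against the startup SLO. Reproducing it from the timestamps in the line:

    // slo_arithmetic.go - reproduces the numbers in the latency-tracker entry.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-01-24 00:38:40 +0000 UTC")
        firstPull := parse("2026-01-24 00:38:41.540945495 +0000 UTC")
        lastPull := parse("2026-01-24 00:38:43.69480558 +0000 UTC")
        running := parse("2026-01-24 00:38:44.424188708 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded

        fmt.Println(e2e) // 4.424188708s, matching the log
        fmt.Println(slo) // 2.270328623s; logged as 2.27032862 (the pull
        //                  timestamps are truncated in the log entry)
    }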
Jan 24 00:38:44.841433 containerd[1988]: time="2026-01-24T00:38:44.841262614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:44.843194 containerd[1988]: time="2026-01-24T00:38:44.842723093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 24 00:38:44.844134 containerd[1988]: time="2026-01-24T00:38:44.844078544Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:44.849125 containerd[1988]: time="2026-01-24T00:38:44.849033598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:44.850340 containerd[1988]: time="2026-01-24T00:38:44.850181443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.155037003s"
Jan 24 00:38:44.850340 containerd[1988]: time="2026-01-24T00:38:44.850231497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 24 00:38:44.854363 containerd[1988]: time="2026-01-24T00:38:44.854263659Z" level=info msg="CreateContainer within sandbox \"6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 24 00:38:44.892961 containerd[1988]: time="2026-01-24T00:38:44.892832294Z" level=info msg="CreateContainer within sandbox \"6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229\""
Jan 24 00:38:44.894615 containerd[1988]: time="2026-01-24T00:38:44.893679849Z" level=info msg="StartContainer for \"04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229\""
Jan 24 00:38:44.941637 systemd[1]: Started cri-containerd-04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229.scope - libcontainer container 04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229.
Jan 24 00:38:44.976689 containerd[1988]: time="2026-01-24T00:38:44.976534650Z" level=info msg="StartContainer for \"04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229\" returns successfully"
Jan 24 00:38:44.989569 systemd[1]: cri-containerd-04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229.scope: Deactivated successfully.
Jan 24 00:38:45.027547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229-rootfs.mount: Deactivated successfully.
Jan 24 00:38:45.129775 containerd[1988]: time="2026-01-24T00:38:45.095251477Z" level=info msg="shim disconnected" id=04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229 namespace=k8s.io
Jan 24 00:38:45.129775 containerd[1988]: time="2026-01-24T00:38:45.129693320Z" level=warning msg="cleaning up after shim disconnected" id=04546bf1cb58eda204344e051b5c1dbbd74404372f16b65b9d51c95d15352229 namespace=k8s.io
Jan 24 00:38:45.129775 containerd[1988]: time="2026-01-24T00:38:45.129712863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:38:45.149954 containerd[1988]: time="2026-01-24T00:38:45.149898817Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:38:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 24 00:38:45.255270 kubelet[3194]: E0124 00:38:45.254916 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943"
Jan 24 00:38:45.415422 kubelet[3194]: I0124 00:38:45.415010 3194 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:38:45.418040 containerd[1988]: time="2026-01-24T00:38:45.417929389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 24 00:38:47.256346 kubelet[3194]: E0124 00:38:47.255939 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943"
Jan 24 00:38:48.377021 containerd[1988]: time="2026-01-24T00:38:48.376622881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:48.378236 containerd[1988]: time="2026-01-24T00:38:48.377960805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 24 00:38:48.379717 containerd[1988]: time="2026-01-24T00:38:48.379654497Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:48.382615 containerd[1988]: time="2026-01-24T00:38:48.382552831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:38:48.383625 containerd[1988]: time="2026-01-24T00:38:48.383472670Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.965482755s"
Jan 24 00:38:48.383625 containerd[1988]: time="2026-01-24T00:38:48.383514139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 24 00:38:48.387219 containerd[1988]: time="2026-01-24T00:38:48.387160362Z" level=info msg="CreateContainer within sandbox \"6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 24 00:38:48.406127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601030344.mount: Deactivated successfully.
Jan 24 00:38:48.413304 containerd[1988]: time="2026-01-24T00:38:48.413245469Z" level=info msg="CreateContainer within sandbox \"6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4\""
Jan 24 00:38:48.413961 containerd[1988]: time="2026-01-24T00:38:48.413928187Z" level=info msg="StartContainer for \"ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4\""
Jan 24 00:38:48.457622 systemd[1]: Started cri-containerd-ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4.scope - libcontainer container ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4.
Jan 24 00:38:48.522950 containerd[1988]: time="2026-01-24T00:38:48.522900500Z" level=info msg="StartContainer for \"ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4\" returns successfully"
Jan 24 00:38:49.258891 kubelet[3194]: E0124 00:38:49.258051 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943"
Jan 24 00:38:49.984999 systemd[1]: cri-containerd-ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4.scope: Deactivated successfully.
Jan 24 00:38:50.031063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4-rootfs.mount: Deactivated successfully.
Jan 24 00:38:50.078426 containerd[1988]: time="2026-01-24T00:38:50.078236723Z" level=info msg="shim disconnected" id=ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4 namespace=k8s.io
Jan 24 00:38:50.078426 containerd[1988]: time="2026-01-24T00:38:50.078306946Z" level=warning msg="cleaning up after shim disconnected" id=ba6079de45ea0d37fe6cd86f463e5dd6f54a972a613dd03c98fd3185308157e4 namespace=k8s.io
Jan 24 00:38:50.078426 containerd[1988]: time="2026-01-24T00:38:50.078315957Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:38:50.113096 kubelet[3194]: I0124 00:38:50.105708 3194 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 24 00:38:50.383218 systemd[1]: Created slice kubepods-besteffort-poddc12b0cb_a033_49df_9736_18c314ed3ccd.slice - libcontainer container kubepods-besteffort-poddc12b0cb_a033_49df_9736_18c314ed3ccd.slice.
Jan 24 00:38:50.400210 systemd[1]: Created slice kubepods-besteffort-pode1f50d23_3a90_4692_90b0_6d62e0594e46.slice - libcontainer container kubepods-besteffort-pode1f50d23_3a90_4692_90b0_6d62e0594e46.slice.
Jan 24 00:38:50.419669 systemd[1]: Created slice kubepods-burstable-pod1e4ae984_32f1_4342_8042_eb57d3f9ba21.slice - libcontainer container kubepods-burstable-pod1e4ae984_32f1_4342_8042_eb57d3f9ba21.slice.
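[Editor's note] The repeated "Error syncing pod, skipping ... cni plugin not initialized" entries for csi-node-driver-g8z2m stop shortly after install-cni exits above: containerd reports NetworkReady=true only once a CNI network config exists in its config directory, which is exactly what install-cni writes. A quick Go check one might run while the copy is still in flight; the /etc/cni/net.d path is containerd's conventional default, assumed here rather than read from this log:

    // cni_ready_check.go - why NetworkReady stays false until install-cni
    // drops a .conf/.conflist into the runtime's CNI config directory.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d" // conventional default, an assumption here
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("no CNI config dir yet:", err)
            return
        }
        found := false
        for _, e := range entries {
            name := e.Name()
            if strings.HasSuffix(name, ".conf") || strings.HasSuffix(name, ".conflist") {
                fmt.Println("CNI config present:", filepath.Join(dir, name))
                found = true
            }
        }
        if !found {
            fmt.Println("directory exists but holds no .conf/.conflist; network stays NotReady")
        }
    }

Consistent with that, the kubelet logs "Fast updating node status as it just became ready" at 00:38:50, right after install-cni's shim is cleaned up.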
Jan 24 00:38:50.434078 kubelet[3194]: I0124 00:38:50.406070 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29f29deb-ec14-4cf7-a095-b62aa4c4a912-config-volume\") pod \"coredns-668d6bf9bc-h7m4v\" (UID: \"29f29deb-ec14-4cf7-a095-b62aa4c4a912\") " pod="kube-system/coredns-668d6bf9bc-h7m4v" Jan 24 00:38:50.434078 kubelet[3194]: I0124 00:38:50.406134 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/559b3199-5162-436c-ae6f-2ec7000948df-calico-apiserver-certs\") pod \"calico-apiserver-5d8fb494d-tmnz4\" (UID: \"559b3199-5162-436c-ae6f-2ec7000948df\") " pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" Jan 24 00:38:50.434078 kubelet[3194]: I0124 00:38:50.406164 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e4ae984-32f1-4342-8042-eb57d3f9ba21-config-volume\") pod \"coredns-668d6bf9bc-28dmx\" (UID: \"1e4ae984-32f1-4342-8042-eb57d3f9ba21\") " pod="kube-system/coredns-668d6bf9bc-28dmx" Jan 24 00:38:50.434078 kubelet[3194]: I0124 00:38:50.406197 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5rtk\" (UniqueName: \"kubernetes.io/projected/29f29deb-ec14-4cf7-a095-b62aa4c4a912-kube-api-access-v5rtk\") pod \"coredns-668d6bf9bc-h7m4v\" (UID: \"29f29deb-ec14-4cf7-a095-b62aa4c4a912\") " pod="kube-system/coredns-668d6bf9bc-h7m4v" Jan 24 00:38:50.434078 kubelet[3194]: I0124 00:38:50.406225 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92126a9f-72bf-4007-b274-6c7bfe78315a-tigera-ca-bundle\") pod \"calico-kube-controllers-6c5f78b9cf-nf2hx\" (UID: \"92126a9f-72bf-4007-b274-6c7bfe78315a\") " pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" Jan 24 00:38:50.430094 systemd[1]: Created slice kubepods-besteffort-pod559b3199_5162_436c_ae6f_2ec7000948df.slice - libcontainer container kubepods-besteffort-pod559b3199_5162_436c_ae6f_2ec7000948df.slice. 
Jan 24 00:38:50.435126 kubelet[3194]: I0124 00:38:50.406263 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f50d23-3a90-4692-90b0-6d62e0594e46-config\") pod \"goldmane-666569f655-qnfl2\" (UID: \"e1f50d23-3a90-4692-90b0-6d62e0594e46\") " pod="calico-system/goldmane-666569f655-qnfl2"
Jan 24 00:38:50.435126 kubelet[3194]: I0124 00:38:50.406293 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwrx\" (UniqueName: \"kubernetes.io/projected/dc12b0cb-a033-49df-9736-18c314ed3ccd-kube-api-access-whwrx\") pod \"whisker-56d78b5697-k5nq5\" (UID: \"dc12b0cb-a033-49df-9736-18c314ed3ccd\") " pod="calico-system/whisker-56d78b5697-k5nq5"
Jan 24 00:38:50.435126 kubelet[3194]: I0124 00:38:50.406320 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e1f50d23-3a90-4692-90b0-6d62e0594e46-goldmane-key-pair\") pod \"goldmane-666569f655-qnfl2\" (UID: \"e1f50d23-3a90-4692-90b0-6d62e0594e46\") " pod="calico-system/goldmane-666569f655-qnfl2"
Jan 24 00:38:50.435126 kubelet[3194]: I0124 00:38:50.406350 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6c246b84-9265-4837-8997-3779f5365703-calico-apiserver-certs\") pod \"calico-apiserver-5d8fb494d-phlb5\" (UID: \"6c246b84-9265-4837-8997-3779f5365703\") " pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5"
Jan 24 00:38:50.435126 kubelet[3194]: I0124 00:38:50.406396 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c56f\" (UniqueName: \"kubernetes.io/projected/92126a9f-72bf-4007-b274-6c7bfe78315a-kube-api-access-6c56f\") pod \"calico-kube-controllers-6c5f78b9cf-nf2hx\" (UID: \"92126a9f-72bf-4007-b274-6c7bfe78315a\") " pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx"
Jan 24 00:38:50.435337 kubelet[3194]: I0124 00:38:50.406429 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1f50d23-3a90-4692-90b0-6d62e0594e46-goldmane-ca-bundle\") pod \"goldmane-666569f655-qnfl2\" (UID: \"e1f50d23-3a90-4692-90b0-6d62e0594e46\") " pod="calico-system/goldmane-666569f655-qnfl2"
Jan 24 00:38:50.435337 kubelet[3194]: I0124 00:38:50.406455 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfz58\" (UniqueName: \"kubernetes.io/projected/559b3199-5162-436c-ae6f-2ec7000948df-kube-api-access-mfz58\") pod \"calico-apiserver-5d8fb494d-tmnz4\" (UID: \"559b3199-5162-436c-ae6f-2ec7000948df\") " pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4"
Jan 24 00:38:50.435337 kubelet[3194]: I0124 00:38:50.406488 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-backend-key-pair\") pod \"whisker-56d78b5697-k5nq5\" (UID: \"dc12b0cb-a033-49df-9736-18c314ed3ccd\") " pod="calico-system/whisker-56d78b5697-k5nq5"
Jan 24 00:38:50.435337 kubelet[3194]: I0124 00:38:50.406527 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-ca-bundle\") pod \"whisker-56d78b5697-k5nq5\" (UID: \"dc12b0cb-a033-49df-9736-18c314ed3ccd\") " pod="calico-system/whisker-56d78b5697-k5nq5"
Jan 24 00:38:50.435337 kubelet[3194]: I0124 00:38:50.406554 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48zb\" (UniqueName: \"kubernetes.io/projected/e1f50d23-3a90-4692-90b0-6d62e0594e46-kube-api-access-j48zb\") pod \"goldmane-666569f655-qnfl2\" (UID: \"e1f50d23-3a90-4692-90b0-6d62e0594e46\") " pod="calico-system/goldmane-666569f655-qnfl2"
Jan 24 00:38:50.436725 kubelet[3194]: I0124 00:38:50.406581 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68sfm\" (UniqueName: \"kubernetes.io/projected/6c246b84-9265-4837-8997-3779f5365703-kube-api-access-68sfm\") pod \"calico-apiserver-5d8fb494d-phlb5\" (UID: \"6c246b84-9265-4837-8997-3779f5365703\") " pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5"
Jan 24 00:38:50.436725 kubelet[3194]: I0124 00:38:50.406608 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkf6t\" (UniqueName: \"kubernetes.io/projected/1e4ae984-32f1-4342-8042-eb57d3f9ba21-kube-api-access-mkf6t\") pod \"coredns-668d6bf9bc-28dmx\" (UID: \"1e4ae984-32f1-4342-8042-eb57d3f9ba21\") " pod="kube-system/coredns-668d6bf9bc-28dmx"
Jan 24 00:38:50.441907 systemd[1]: Created slice kubepods-besteffort-pod92126a9f_72bf_4007_b274_6c7bfe78315a.slice - libcontainer container kubepods-besteffort-pod92126a9f_72bf_4007_b274_6c7bfe78315a.slice.
Jan 24 00:38:50.454276 containerd[1988]: time="2026-01-24T00:38:50.454210201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 24 00:38:50.459890 systemd[1]: Created slice kubepods-besteffort-pod6c246b84_9265_4837_8997_3779f5365703.slice - libcontainer container kubepods-besteffort-pod6c246b84_9265_4837_8997_3779f5365703.slice.
Jan 24 00:38:50.469965 systemd[1]: Created slice kubepods-burstable-pod29f29deb_ec14_4cf7_a095_b62aa4c4a912.slice - libcontainer container kubepods-burstable-pod29f29deb_ec14_4cf7_a095_b62aa4c4a912.slice.
Jan 24 00:38:50.706671 containerd[1988]: time="2026-01-24T00:38:50.706633085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d78b5697-k5nq5,Uid:dc12b0cb-a033-49df-9736-18c314ed3ccd,Namespace:calico-system,Attempt:0,}"
Jan 24 00:38:50.712321 containerd[1988]: time="2026-01-24T00:38:50.712281689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qnfl2,Uid:e1f50d23-3a90-4692-90b0-6d62e0594e46,Namespace:calico-system,Attempt:0,}"
Jan 24 00:38:50.731589 containerd[1988]: time="2026-01-24T00:38:50.730247428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28dmx,Uid:1e4ae984-32f1-4342-8042-eb57d3f9ba21,Namespace:kube-system,Attempt:0,}"
Jan 24 00:38:50.737196 containerd[1988]: time="2026-01-24T00:38:50.735290966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-tmnz4,Uid:559b3199-5162-436c-ae6f-2ec7000948df,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:38:50.751625 containerd[1988]: time="2026-01-24T00:38:50.751581797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5f78b9cf-nf2hx,Uid:92126a9f-72bf-4007-b274-6c7bfe78315a,Namespace:calico-system,Attempt:0,}"
Jan 24 00:38:50.765101 containerd[1988]: time="2026-01-24T00:38:50.765034626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-phlb5,Uid:6c246b84-9265-4837-8997-3779f5365703,Namespace:calico-apiserver,Attempt:0,}"
Jan 24 00:38:50.789697 containerd[1988]: time="2026-01-24T00:38:50.789652325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7m4v,Uid:29f29deb-ec14-4cf7-a095-b62aa4c4a912,Namespace:kube-system,Attempt:0,}"
Jan 24 00:38:51.261238 systemd[1]: Created slice kubepods-besteffort-pod08028277_ca96_466b_b85d_b33e87d62943.slice - libcontainer container kubepods-besteffort-pod08028277_ca96_466b_b85d_b33e87d62943.slice.
Jan 24 00:38:51.264418 containerd[1988]: time="2026-01-24T00:38:51.264239551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8z2m,Uid:08028277-ca96-466b-b85d-b33e87d62943,Namespace:calico-system,Attempt:0,}"
Jan 24 00:38:51.813627 containerd[1988]: time="2026-01-24T00:38:51.812082063Z" level=error msg="Failed to destroy network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.820946 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b-shm.mount: Deactivated successfully.
Jan 24 00:38:51.833864 containerd[1988]: time="2026-01-24T00:38:51.812615722Z" level=error msg="Failed to destroy network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.840894 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315-shm.mount: Deactivated successfully.
Jan 24 00:38:51.846981 containerd[1988]: time="2026-01-24T00:38:51.841543918Z" level=error msg="encountered an error cleaning up failed sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.846981 containerd[1988]: time="2026-01-24T00:38:51.841622191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-tmnz4,Uid:559b3199-5162-436c-ae6f-2ec7000948df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.846981 containerd[1988]: time="2026-01-24T00:38:51.841722984Z" level=error msg="Failed to destroy network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.846981 containerd[1988]: time="2026-01-24T00:38:51.842300277Z" level=error msg="encountered an error cleaning up failed sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.846981 containerd[1988]: time="2026-01-24T00:38:51.842360402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28dmx,Uid:1e4ae984-32f1-4342-8042-eb57d3f9ba21,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.846981 containerd[1988]: time="2026-01-24T00:38:51.843917349Z" level=error msg="encountered an error cleaning up failed sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.846981 containerd[1988]: time="2026-01-24T00:38:51.843964065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8z2m,Uid:08028277-ca96-466b-b85d-b33e87d62943,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.849414 containerd[1988]: time="2026-01-24T00:38:51.838199508Z" level=error msg="Failed to destroy network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.849918 containerd[1988]: time="2026-01-24T00:38:51.849884277Z" level=error msg="encountered an error cleaning up failed sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.850054 containerd[1988]: time="2026-01-24T00:38:51.850028008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5f78b9cf-nf2hx,Uid:92126a9f-72bf-4007-b274-6c7bfe78315a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.850335 containerd[1988]: time="2026-01-24T00:38:51.850311456Z" level=error msg="Failed to destroy network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.851077 containerd[1988]: time="2026-01-24T00:38:51.851012673Z" level=error msg="encountered an error cleaning up failed sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.851245 containerd[1988]: time="2026-01-24T00:38:51.851188446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56d78b5697-k5nq5,Uid:dc12b0cb-a033-49df-9736-18c314ed3ccd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.851823 containerd[1988]: time="2026-01-24T00:38:51.851666304Z" level=error msg="Failed to destroy network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.852941 containerd[1988]: time="2026-01-24T00:38:51.852904626Z" level=error msg="encountered an error cleaning up failed sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.853143 containerd[1988]: time="2026-01-24T00:38:51.853111651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-phlb5,Uid:6c246b84-9265-4837-8997-3779f5365703,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882560 containerd[1988]: time="2026-01-24T00:38:51.853319878Z" level=error msg="Failed to destroy network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882560 containerd[1988]: time="2026-01-24T00:38:51.853659410Z" level=error msg="encountered an error cleaning up failed sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882560 containerd[1988]: time="2026-01-24T00:38:51.853704593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7m4v,Uid:29f29deb-ec14-4cf7-a095-b62aa4c4a912,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882560 containerd[1988]: time="2026-01-24T00:38:51.854109528Z" level=error msg="Failed to destroy network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882560 containerd[1988]: time="2026-01-24T00:38:51.854480489Z" level=error msg="encountered an error cleaning up failed sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882560 containerd[1988]: time="2026-01-24T00:38:51.854565188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qnfl2,Uid:e1f50d23-3a90-4692-90b0-6d62e0594e46,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882898 kubelet[3194]: E0124 00:38:51.861224 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882898 kubelet[3194]: E0124 00:38:51.866962 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882898 kubelet[3194]: E0124 00:38:51.867311 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.882898 kubelet[3194]: E0124 00:38:51.868232 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.883467 kubelet[3194]: E0124 00:38:51.868274 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-28dmx"
Jan 24 00:38:51.883467 kubelet[3194]: E0124 00:38:51.868304 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-28dmx"
Jan 24 00:38:51.883467 kubelet[3194]: E0124 00:38:51.868423 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qnfl2"
Jan 24 00:38:51.883467 kubelet[3194]: E0124 00:38:51.868453 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qnfl2"
Jan 24 00:38:51.883664 kubelet[3194]: E0124 00:38:51.868491 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qnfl2_calico-system(e1f50d23-3a90-4692-90b0-6d62e0594e46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qnfl2_calico-system(e1f50d23-3a90-4692-90b0-6d62e0594e46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46"
Jan 24 00:38:51.883664 kubelet[3194]: E0124 00:38:51.868525 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4"
Jan 24 00:38:51.883664 kubelet[3194]: E0124 00:38:51.868546 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4"
Jan 24 00:38:51.883841 kubelet[3194]: E0124 00:38:51.868581 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d8fb494d-tmnz4_calico-apiserver(559b3199-5162-436c-ae6f-2ec7000948df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d8fb494d-tmnz4_calico-apiserver(559b3199-5162-436c-ae6f-2ec7000948df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df"
Jan 24 00:38:51.883841 kubelet[3194]: E0124 00:38:51.868603 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h7m4v"
Jan 24 00:38:51.883841 kubelet[3194]: E0124 00:38:51.868623 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h7m4v"
Jan 24 00:38:51.884016 kubelet[3194]: E0124 00:38:51.868654 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h7m4v_kube-system(29f29deb-ec14-4cf7-a095-b62aa4c4a912)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h7m4v_kube-system(29f29deb-ec14-4cf7-a095-b62aa4c4a912)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h7m4v" podUID="29f29deb-ec14-4cf7-a095-b62aa4c4a912"
Jan 24 00:38:51.884016 kubelet[3194]: E0124 00:38:51.868693 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.884016 kubelet[3194]: E0124 00:38:51.868717 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx"
Jan 24 00:38:51.884243 kubelet[3194]: E0124 00:38:51.868738 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx"
Jan 24 00:38:51.884243 kubelet[3194]: E0124 00:38:51.868771 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c5f78b9cf-nf2hx_calico-system(92126a9f-72bf-4007-b274-6c7bfe78315a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c5f78b9cf-nf2hx_calico-system(92126a9f-72bf-4007-b274-6c7bfe78315a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a"
Jan 24 00:38:51.884243 kubelet[3194]: E0124 00:38:51.868868 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.884420 kubelet[3194]: E0124 00:38:51.868894 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g8z2m"
Jan 24 00:38:51.884420 kubelet[3194]: E0124 00:38:51.868920 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g8z2m"
Jan 24 00:38:51.884420 kubelet[3194]: E0124 00:38:51.868951 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943"
Jan 24 00:38:51.884595 kubelet[3194]: E0124 00:38:51.868983 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.884595 kubelet[3194]: E0124 00:38:51.869004 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d78b5697-k5nq5"
Jan 24 00:38:51.884595 kubelet[3194]: E0124 00:38:51.869024 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56d78b5697-k5nq5"
Jan 24 00:38:51.884780 kubelet[3194]: E0124 00:38:51.869057 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56d78b5697-k5nq5_calico-system(dc12b0cb-a033-49df-9736-18c314ed3ccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56d78b5697-k5nq5_calico-system(dc12b0cb-a033-49df-9736-18c314ed3ccd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d78b5697-k5nq5" podUID="dc12b0cb-a033-49df-9736-18c314ed3ccd"
Jan 24 00:38:51.884780 kubelet[3194]: E0124 00:38:51.869088 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:51.884780 kubelet[3194]: E0124 00:38:51.869124 3194 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5"
Jan 24 00:38:51.885033 kubelet[3194]: E0124 00:38:51.869142 3194 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5"
Jan 24 00:38:51.885033 kubelet[3194]: E0124 00:38:51.869199 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d8fb494d-phlb5_calico-apiserver(6c246b84-9265-4837-8997-3779f5365703)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d8fb494d-phlb5_calico-apiserver(6c246b84-9265-4837-8997-3779f5365703)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703"
Jan 24 00:38:51.885033 kubelet[3194]: E0124 00:38:51.868362 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-28dmx_kube-system(1e4ae984-32f1-4342-8042-eb57d3f9ba21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-28dmx_kube-system(1e4ae984-32f1-4342-8042-eb57d3f9ba21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-28dmx" podUID="1e4ae984-32f1-4342-8042-eb57d3f9ba21"
Jan 24 00:38:52.032506 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e-shm.mount: Deactivated successfully.
Jan 24 00:38:52.032904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233-shm.mount: Deactivated successfully.
Jan 24 00:38:52.033259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822-shm.mount: Deactivated successfully.
Jan 24 00:38:52.033505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b-shm.mount: Deactivated successfully.
Jan 24 00:38:52.033596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17-shm.mount: Deactivated successfully.
Jan 24 00:38:52.033677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6-shm.mount: Deactivated successfully.
Jan 24 00:38:52.471006 kubelet[3194]: I0124 00:38:52.470953 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b"
Jan 24 00:38:52.476163 kubelet[3194]: I0124 00:38:52.475616 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b"
Jan 24 00:38:52.511090 kubelet[3194]: I0124 00:38:52.511053 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6"
Jan 24 00:38:52.521314 kubelet[3194]: I0124 00:38:52.519314 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e"
Jan 24 00:38:52.538912 kubelet[3194]: I0124 00:38:52.538870 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233"
Jan 24 00:38:52.553680 kubelet[3194]: I0124 00:38:52.553651 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822"
Jan 24 00:38:52.557065 kubelet[3194]: I0124 00:38:52.557039 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315"
Jan 24 00:38:52.568597 kubelet[3194]: I0124 00:38:52.568567 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17"
Jan 24 00:38:52.597646 containerd[1988]: time="2026-01-24T00:38:52.597586178Z" level=info msg="StopPodSandbox for \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\""
Jan 24 00:38:52.600774 containerd[1988]: time="2026-01-24T00:38:52.600719439Z" level=info msg="Ensure that sandbox 1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17 in task-service has been cleanup successfully"
Jan 24 00:38:52.613485 containerd[1988]: time="2026-01-24T00:38:52.613441042Z" level=info msg="StopPodSandbox for \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\""
Jan 24 00:38:52.613863 containerd[1988]: time="2026-01-24T00:38:52.613839750Z" level=info msg="Ensure that sandbox 2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b in task-service has been cleanup successfully"
Jan 24 00:38:52.615619 containerd[1988]: time="2026-01-24T00:38:52.615582644Z" level=info msg="StopPodSandbox for \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\""
Jan 24 00:38:52.616056 containerd[1988]: time="2026-01-24T00:38:52.616009101Z" level=info msg="Ensure that sandbox 651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b in task-service has been cleanup successfully"
Jan 24 00:38:52.616953 containerd[1988]: time="2026-01-24T00:38:52.616922207Z" level=info msg="StopPodSandbox for \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\""
Jan 24 00:38:52.617245 containerd[1988]: time="2026-01-24T00:38:52.617220466Z" level=info msg="Ensure that sandbox 04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6 in task-service has been cleanup successfully"
Jan 24 00:38:52.621271 containerd[1988]: time="2026-01-24T00:38:52.617436056Z" level=info msg="StopPodSandbox for \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\""
Jan 24 00:38:52.621927 containerd[1988]: time="2026-01-24T00:38:52.621895603Z" level=info msg="Ensure that sandbox dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e in task-service has been cleanup successfully"
Jan 24 00:38:52.625619 containerd[1988]: time="2026-01-24T00:38:52.617542821Z" level=info msg="StopPodSandbox for \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\""
Jan 24 00:38:52.626038 containerd[1988]: time="2026-01-24T00:38:52.626009298Z" level=info msg="Ensure that sandbox 6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315 in task-service has been cleanup successfully"
Jan 24 00:38:52.635446 containerd[1988]: time="2026-01-24T00:38:52.617477705Z" level=info msg="StopPodSandbox for \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\""
Jan 24 00:38:52.636995 containerd[1988]: time="2026-01-24T00:38:52.636951237Z" level=info msg="Ensure that sandbox 91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233 in task-service has been cleanup successfully"
Jan 24 00:38:52.638717 containerd[1988]: time="2026-01-24T00:38:52.617509252Z" level=info msg="StopPodSandbox for \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\""
Jan 24 00:38:52.639128 containerd[1988]: time="2026-01-24T00:38:52.638955246Z" level=info msg="Ensure that sandbox 23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822 in task-service has been cleanup successfully"
Jan 24 00:38:52.765807 containerd[1988]: time="2026-01-24T00:38:52.765651301Z" level=error msg="StopPodSandbox for \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\" failed" error="failed to destroy network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.766578 kubelet[3194]: E0124 00:38:52.766437 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b"
Jan 24 00:38:52.778687 kubelet[3194]: E0124 00:38:52.766523 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b"}
Jan 24 00:38:52.778863 kubelet[3194]: E0124 00:38:52.778723 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e4ae984-32f1-4342-8042-eb57d3f9ba21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:38:52.778863 kubelet[3194]: E0124 00:38:52.778773 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e4ae984-32f1-4342-8042-eb57d3f9ba21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-28dmx" podUID="1e4ae984-32f1-4342-8042-eb57d3f9ba21"
Jan 24 00:38:52.801970 containerd[1988]: time="2026-01-24T00:38:52.801697378Z" level=error msg="StopPodSandbox for \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\" failed" error="failed to destroy network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.802743 kubelet[3194]: E0124 00:38:52.801957 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17"
Jan 24 00:38:52.802743 kubelet[3194]: E0124 00:38:52.802010 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17"}
Jan 24 00:38:52.802743 kubelet[3194]: E0124 00:38:52.802054 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1f50d23-3a90-4692-90b0-6d62e0594e46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:38:52.802743 kubelet[3194]: E0124 00:38:52.802100 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1f50d23-3a90-4692-90b0-6d62e0594e46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46"
Jan 24 00:38:52.815191 containerd[1988]: time="2026-01-24T00:38:52.815138722Z" level=error msg="StopPodSandbox for \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\" failed" error="failed to destroy network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.815633 kubelet[3194]: E0124 00:38:52.815566 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233"
Jan 24 00:38:52.815768 kubelet[3194]: E0124 00:38:52.815635 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233"}
Jan 24 00:38:52.815768 kubelet[3194]: E0124 00:38:52.815677 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6c246b84-9265-4837-8997-3779f5365703\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:38:52.815768 kubelet[3194]: E0124 00:38:52.815728 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6c246b84-9265-4837-8997-3779f5365703\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703"
Jan 24 00:38:52.821881 containerd[1988]: time="2026-01-24T00:38:52.821827524Z" level=error msg="StopPodSandbox for \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\" failed" error="failed to destroy network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.822315 kubelet[3194]: E0124 00:38:52.822252 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b"
Jan 24 00:38:52.822542 kubelet[3194]: E0124 00:38:52.822314 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b"}
Jan 24 00:38:52.822542 kubelet[3194]: E0124 00:38:52.822361 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08028277-ca96-466b-b85d-b33e87d62943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:38:52.822542 kubelet[3194]: E0124 00:38:52.822433 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08028277-ca96-466b-b85d-b33e87d62943\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943"
Jan 24 00:38:52.824194 containerd[1988]: time="2026-01-24T00:38:52.823976206Z" level=error msg="StopPodSandbox for \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\" failed" error="failed to destroy network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.824422 kubelet[3194]: E0124 00:38:52.824205 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6"
Jan 24 00:38:52.824422 kubelet[3194]: E0124 00:38:52.824255 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6"}
Jan 24 00:38:52.824422 kubelet[3194]: E0124 00:38:52.824294 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc12b0cb-a033-49df-9736-18c314ed3ccd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:38:52.824422 kubelet[3194]: E0124 00:38:52.824326 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc12b0cb-a033-49df-9736-18c314ed3ccd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56d78b5697-k5nq5" podUID="dc12b0cb-a033-49df-9736-18c314ed3ccd"
Jan 24 00:38:52.843684 containerd[1988]: time="2026-01-24T00:38:52.843312817Z" level=error msg="StopPodSandbox for \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\" failed" error="failed to destroy network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.843926 containerd[1988]: time="2026-01-24T00:38:52.843884174Z" level=error msg="StopPodSandbox for \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\" failed" error="failed to destroy network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.844706 kubelet[3194]: E0124 00:38:52.844658 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822"
Jan 24 00:38:52.844900 kubelet[3194]: E0124 00:38:52.844724 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822"}
Jan 24 00:38:52.844900 kubelet[3194]: E0124 00:38:52.844771 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92126a9f-72bf-4007-b274-6c7bfe78315a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:38:52.844900 kubelet[3194]: E0124 00:38:52.844814 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92126a9f-72bf-4007-b274-6c7bfe78315a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a"
Jan 24 00:38:52.844900 kubelet[3194]: E0124 00:38:52.844658 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315"
Jan 24 00:38:52.844900 kubelet[3194]: E0124 00:38:52.844856 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315"}
Jan 24 00:38:52.845234 kubelet[3194]: E0124 00:38:52.844884 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"559b3199-5162-436c-ae6f-2ec7000948df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 24 00:38:52.845234 kubelet[3194]: E0124 00:38:52.844909 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"559b3199-5162-436c-ae6f-2ec7000948df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df"
Jan 24 00:38:52.849885 containerd[1988]: time="2026-01-24T00:38:52.849834607Z" level=error msg="StopPodSandbox for \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\" failed" error="failed to destroy network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 24 00:38:52.850122 kubelet[3194]: E0124 00:38:52.850080 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e"
Jan 24 00:38:52.850240 kubelet[3194]: E0124 00:38:52.850187 3194 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e"}
Jan 24 00:38:52.850309 kubelet[3194]: E0124 00:38:52.850234 3194 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29f29deb-ec14-4cf7-a095-b62aa4c4a912\" with KillPodSandboxError: \"rpc
error: code = Unknown desc = failed to destroy network for sandbox \\\"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:38:52.850309 kubelet[3194]: E0124 00:38:52.850279 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29f29deb-ec14-4cf7-a095-b62aa4c4a912\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h7m4v" podUID="29f29deb-ec14-4cf7-a095-b62aa4c4a912" Jan 24 00:38:56.948972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794974367.mount: Deactivated successfully. Jan 24 00:38:57.060535 containerd[1988]: time="2026-01-24T00:38:57.060430644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:38:57.079585 containerd[1988]: time="2026-01-24T00:38:57.079526479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:57.155337 containerd[1988]: time="2026-01-24T00:38:57.155291540Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:57.158334 containerd[1988]: time="2026-01-24T00:38:57.158271140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:38:57.162893 containerd[1988]: time="2026-01-24T00:38:57.161962070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.704535117s" Jan 24 00:38:57.162893 containerd[1988]: time="2026-01-24T00:38:57.162014720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:38:57.191867 containerd[1988]: time="2026-01-24T00:38:57.191809970Z" level=info msg="CreateContainer within sandbox \"6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:38:57.294327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635507887.mount: Deactivated successfully. 
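Every KillPodSandbox failure above has the same root cause: the Calico CNI binary resolves the node's identity from /var/lib/calico/nodename, a file the calico/node container writes after it starts, and at this point calico/node is still being pulled (the ghcr.io/flatcar/calico/node:v3.30.4 pull only completes at 00:38:57, just above). A minimal Go sketch of that gating check, assuming only the path and error wording visible in the log (illustrative, not Calico's actual source):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// nodename mirrors the precondition the Calico CNI plugin enforces before
// any ADD/DEL: the node name is read from a file that calico/node writes at
// startup. Until that file exists, every sandbox teardown fails as above.
func nodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	data, err := os.ReadFile(path)
	if errors.Is(err, os.ErrNotExist) {
		// Error text copied from the log entries above.
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/", path)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```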
Jan 24 00:38:57.326340 containerd[1988]: time="2026-01-24T00:38:57.309430770Z" level=info msg="CreateContainer within sandbox \"6a880977a7fd7a89dd09cd5e9d9d9d572ac29fff5f62d5cc37992e8245ebca13\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82\"" Jan 24 00:38:57.326340 containerd[1988]: time="2026-01-24T00:38:57.310032449Z" level=info msg="StartContainer for \"667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82\"" Jan 24 00:38:57.453771 systemd[1]: Started cri-containerd-667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82.scope - libcontainer container 667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82. Jan 24 00:38:57.503452 containerd[1988]: time="2026-01-24T00:38:57.503043793Z" level=info msg="StartContainer for \"667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82\" returns successfully" Jan 24 00:38:57.643572 kubelet[3194]: I0124 00:38:57.639906 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zgkks" podStartSLOduration=1.020527824 podStartE2EDuration="16.630453064s" podCreationTimestamp="2026-01-24 00:38:41 +0000 UTC" firstStartedPulling="2026-01-24 00:38:41.553866365 +0000 UTC m=+22.474913141" lastFinishedPulling="2026-01-24 00:38:57.163791613 +0000 UTC m=+38.084838381" observedRunningTime="2026-01-24 00:38:57.628284495 +0000 UTC m=+38.549331310" watchObservedRunningTime="2026-01-24 00:38:57.630453064 +0000 UTC m=+38.551499886" Jan 24 00:38:57.826446 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:38:57.828057 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 24 00:38:58.609684 kubelet[3194]: I0124 00:38:58.609642 3194 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:38:59.703975 containerd[1988]: time="2026-01-24T00:38:59.703930049Z" level=info msg="StopPodSandbox for \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\"" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:38:59.836 [INFO][4683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:38:59.837 [INFO][4683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" iface="eth0" netns="/var/run/netns/cni-54ebc415-7b9a-765a-1021-0262f92aec58" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:38:59.838 [INFO][4683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" iface="eth0" netns="/var/run/netns/cni-54ebc415-7b9a-765a-1021-0262f92aec58" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:38:59.839 [INFO][4683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" iface="eth0" netns="/var/run/netns/cni-54ebc415-7b9a-765a-1021-0262f92aec58" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:38:59.839 [INFO][4683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:38:59.839 [INFO][4683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:39:00.278 [INFO][4690] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:39:00.285 [INFO][4690] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:39:00.286 [INFO][4690] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:39:00.305 [WARNING][4690] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:39:00.305 [INFO][4690] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:39:00.308 [INFO][4690] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:00.314337 containerd[1988]: 2026-01-24 00:39:00.311 [INFO][4683] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:00.321746 containerd[1988]: time="2026-01-24T00:39:00.315883685Z" level=info msg="TearDown network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\" successfully" Jan 24 00:39:00.321746 containerd[1988]: time="2026-01-24T00:39:00.315922714Z" level=info msg="StopPodSandbox for \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\" returns successfully" Jan 24 00:39:00.320178 systemd[1]: run-netns-cni\x2d54ebc415\x2d7b9a\x2d765a\x2d1021\x2d0262f92aec58.mount: Deactivated successfully. 
Jan 24 00:39:00.484700 kubelet[3194]: I0124 00:39:00.484652 3194 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whwrx\" (UniqueName: \"kubernetes.io/projected/dc12b0cb-a033-49df-9736-18c314ed3ccd-kube-api-access-whwrx\") pod \"dc12b0cb-a033-49df-9736-18c314ed3ccd\" (UID: \"dc12b0cb-a033-49df-9736-18c314ed3ccd\") " Jan 24 00:39:00.485210 kubelet[3194]: I0124 00:39:00.484933 3194 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-backend-key-pair\") pod \"dc12b0cb-a033-49df-9736-18c314ed3ccd\" (UID: \"dc12b0cb-a033-49df-9736-18c314ed3ccd\") " Jan 24 00:39:00.485210 kubelet[3194]: I0124 00:39:00.484968 3194 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-ca-bundle\") pod \"dc12b0cb-a033-49df-9736-18c314ed3ccd\" (UID: \"dc12b0cb-a033-49df-9736-18c314ed3ccd\") " Jan 24 00:39:00.487719 kubelet[3194]: I0124 00:39:00.485320 3194 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dc12b0cb-a033-49df-9736-18c314ed3ccd" (UID: "dc12b0cb-a033-49df-9736-18c314ed3ccd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:39:00.503472 kubelet[3194]: I0124 00:39:00.503032 3194 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dc12b0cb-a033-49df-9736-18c314ed3ccd" (UID: "dc12b0cb-a033-49df-9736-18c314ed3ccd"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:39:00.503785 kubelet[3194]: I0124 00:39:00.503753 3194 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc12b0cb-a033-49df-9736-18c314ed3ccd-kube-api-access-whwrx" (OuterVolumeSpecName: "kube-api-access-whwrx") pod "dc12b0cb-a033-49df-9736-18c314ed3ccd" (UID: "dc12b0cb-a033-49df-9736-18c314ed3ccd"). InnerVolumeSpecName "kube-api-access-whwrx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:39:00.505041 systemd[1]: var-lib-kubelet-pods-dc12b0cb\x2da033\x2d49df\x2d9736\x2d18c314ed3ccd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwhwrx.mount: Deactivated successfully. Jan 24 00:39:00.505156 systemd[1]: var-lib-kubelet-pods-dc12b0cb\x2da033\x2d49df\x2d9736\x2d18c314ed3ccd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
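The mount unit names in the two systemd lines above show systemd's path escaping: '/' becomes '-', so any literal '-' or other special byte in the path has to be hex-escaped (\x2d for '-', \x7e for '~'). A simplified Go approximation of that escaping, covering the cases visible here (real systemd-escape has a few extra rules, e.g. for a leading dot):

```go
package main

import "fmt"

// escapeUnitPath approximates systemd's path escaping as seen in the log:
// a leading "/" is dropped, "/" becomes "-", and any byte that is not
// [A-Za-z0-9:_.] is rewritten as \xXX (so "-" itself becomes \x2d).
// Simplified sketch, not a full reimplementation of systemd-escape.
func escapeUnitPath(p string) string {
	if len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	p := "/var/lib/kubelet/pods/dc12b0cb-a033-49df-9736-18c314ed3ccd/volumes/kubernetes.io~secret/whisker-backend-key-pair"
	// Prints the same unit name systemd reported above, plus ".mount".
	fmt.Println(escapeUnitPath(p) + ".mount")
}
```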
Jan 24 00:39:00.585605 kubelet[3194]: I0124 00:39:00.585368 3194 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-ca-bundle\") on node \"ip-172-31-23-37\" DevicePath \"\"" Jan 24 00:39:00.585605 kubelet[3194]: I0124 00:39:00.585441 3194 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dc12b0cb-a033-49df-9736-18c314ed3ccd-whisker-backend-key-pair\") on node \"ip-172-31-23-37\" DevicePath \"\"" Jan 24 00:39:00.585605 kubelet[3194]: I0124 00:39:00.585453 3194 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-whwrx\" (UniqueName: \"kubernetes.io/projected/dc12b0cb-a033-49df-9736-18c314ed3ccd-kube-api-access-whwrx\") on node \"ip-172-31-23-37\" DevicePath \"\"" Jan 24 00:39:00.620084 systemd[1]: Removed slice kubepods-besteffort-poddc12b0cb_a033_49df_9736_18c314ed3ccd.slice - libcontainer container kubepods-besteffort-poddc12b0cb_a033_49df_9736_18c314ed3ccd.slice. Jan 24 00:39:00.854718 systemd[1]: Created slice kubepods-besteffort-pode59d88b0_80a3_4d3a_8f96_ae389146720c.slice - libcontainer container kubepods-besteffort-pode59d88b0_80a3_4d3a_8f96_ae389146720c.slice. Jan 24 00:39:00.887800 kubelet[3194]: I0124 00:39:00.887729 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e59d88b0-80a3-4d3a-8f96-ae389146720c-whisker-ca-bundle\") pod \"whisker-764954c6fc-ns4t5\" (UID: \"e59d88b0-80a3-4d3a-8f96-ae389146720c\") " pod="calico-system/whisker-764954c6fc-ns4t5" Jan 24 00:39:00.887968 kubelet[3194]: I0124 00:39:00.887821 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e59d88b0-80a3-4d3a-8f96-ae389146720c-whisker-backend-key-pair\") pod \"whisker-764954c6fc-ns4t5\" (UID: \"e59d88b0-80a3-4d3a-8f96-ae389146720c\") " pod="calico-system/whisker-764954c6fc-ns4t5" Jan 24 00:39:00.887968 kubelet[3194]: I0124 00:39:00.887840 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klmv5\" (UniqueName: \"kubernetes.io/projected/e59d88b0-80a3-4d3a-8f96-ae389146720c-kube-api-access-klmv5\") pod \"whisker-764954c6fc-ns4t5\" (UID: \"e59d88b0-80a3-4d3a-8f96-ae389146720c\") " pod="calico-system/whisker-764954c6fc-ns4t5" Jan 24 00:39:01.158477 containerd[1988]: time="2026-01-24T00:39:01.158420104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-764954c6fc-ns4t5,Uid:e59d88b0-80a3-4d3a-8f96-ae389146720c,Namespace:calico-system,Attempt:0,}" Jan 24 00:39:01.258451 kubelet[3194]: I0124 00:39:01.258362 3194 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc12b0cb-a033-49df-9736-18c314ed3ccd" path="/var/lib/kubelet/pods/dc12b0cb-a033-49df-9736-18c314ed3ccd/volumes" Jan 24 00:39:01.402188 systemd-networkd[1900]: cali72282eb4efe: Link UP Jan 24 00:39:01.402267 (udev-worker)[4741]: Network interface NamePolicy= disabled on kernel command line. 
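The paired "Removed slice" / "Created slice" lines above show kubelet's pod cgroup naming: QoS class plus the pod UID, with the UID's dashes rewritten to underscores because '-' is systemd's slice-hierarchy separator. A sketch of the pattern exactly as it appears here (not kubelet's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the systemd slice names visible in the log:
// kubepods-<qos>-pod<uid>.slice, with "-" in the UID replaced by "_".
// Simplified sketch of kubelet's cgroup naming, not its implementation.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "e59d88b0-80a3-4d3a-8f96-ae389146720c"))
	// kubepods-besteffort-pode59d88b0_80a3_4d3a_8f96_ae389146720c.slice
}
```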
Jan 24 00:39:01.402957 systemd-networkd[1900]: cali72282eb4efe: Gained carrier Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.234 [INFO][4720] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.246 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0 whisker-764954c6fc- calico-system e59d88b0-80a3-4d3a-8f96-ae389146720c 909 0 2026-01-24 00:39:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:764954c6fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-37 whisker-764954c6fc-ns4t5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali72282eb4efe [] [] }} ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.247 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.286 [INFO][4733] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" HandleID="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Workload="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.286 [INFO][4733] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" HandleID="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Workload="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-37", "pod":"whisker-764954c6fc-ns4t5", "timestamp":"2026-01-24 00:39:01.286266213 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.286 [INFO][4733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.286 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.286 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.301 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.330 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.345 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.350 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.354 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.354 [INFO][4733] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.356 [INFO][4733] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5 Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.362 [INFO][4733] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.370 [INFO][4733] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.1/26] block=192.168.114.0/26 handle="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.370 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.1/26] handle="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" host="ip-172-31-23-37" Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.370 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
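The [4733] IPAM trail above is Calico's block-affinity allocation: this host holds an affinity for the block 192.168.114.0/26, so under the host-wide IPAM lock it claims the first free address in that block, here 192.168.114.1. A toy model of that selection step, leaving out the datastore writes and contention handling the real code does:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the host's affine block and returns the first address that
// is neither the network address nor already allocated. A toy model of
// Calico's block-based assignment; the real code claims IPs by writing the
// block back to the datastore under the host-wide IPAM lock.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.114.0/26") // the affinity from the log
	allocated := map[netip.Addr]bool{}

	a, _ := nextFree(block, allocated)
	allocated[a] = true
	fmt.Println("assigned:", a) // 192.168.114.1, as for whisker-764954c6fc-ns4t5

	b, _ := nextFree(block, allocated)
	fmt.Println("assigned:", b) // 192.168.114.2, the next claim in the block
}
```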
Jan 24 00:39:01.439563 containerd[1988]: 2026-01-24 00:39:01.370 [INFO][4733] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.1/26] IPv6=[] ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" HandleID="k8s-pod-network.50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Workload="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" Jan 24 00:39:01.442083 containerd[1988]: 2026-01-24 00:39:01.377 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0", GenerateName:"whisker-764954c6fc-", Namespace:"calico-system", SelfLink:"", UID:"e59d88b0-80a3-4d3a-8f96-ae389146720c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 39, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"764954c6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"whisker-764954c6fc-ns4t5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali72282eb4efe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:01.442083 containerd[1988]: 2026-01-24 00:39:01.377 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.1/32] ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" Jan 24 00:39:01.442083 containerd[1988]: 2026-01-24 00:39:01.377 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72282eb4efe ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" Jan 24 00:39:01.442083 containerd[1988]: 2026-01-24 00:39:01.405 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" Jan 24 00:39:01.442083 containerd[1988]: 2026-01-24 00:39:01.406 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" 
WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0", GenerateName:"whisker-764954c6fc-", Namespace:"calico-system", SelfLink:"", UID:"e59d88b0-80a3-4d3a-8f96-ae389146720c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 39, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"764954c6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5", Pod:"whisker-764954c6fc-ns4t5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali72282eb4efe", MAC:"ce:49:2a:c0:c2:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:01.442083 containerd[1988]: 2026-01-24 00:39:01.434 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5" Namespace="calico-system" Pod="whisker-764954c6fc-ns4t5" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--764954c6fc--ns4t5-eth0" Jan 24 00:39:01.521752 containerd[1988]: time="2026-01-24T00:39:01.521332667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:01.522620 containerd[1988]: time="2026-01-24T00:39:01.522430799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:01.522620 containerd[1988]: time="2026-01-24T00:39:01.522459445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:01.522620 containerd[1988]: time="2026-01-24T00:39:01.522565130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:01.602570 systemd[1]: run-containerd-runc-k8s.io-50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5-runc.thI0M7.mount: Deactivated successfully. Jan 24 00:39:01.611933 systemd[1]: Started cri-containerd-50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5.scope - libcontainer container 50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5. 
Jan 24 00:39:01.679441 containerd[1988]: time="2026-01-24T00:39:01.678728307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-764954c6fc-ns4t5,Uid:e59d88b0-80a3-4d3a-8f96-ae389146720c,Namespace:calico-system,Attempt:0,} returns sandbox id \"50f52f4892d650aeefb72b4722909888ffcc1126a23b4bf351a76416ac7eb0b5\"" Jan 24 00:39:01.682915 containerd[1988]: time="2026-01-24T00:39:01.682276804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:39:01.945192 containerd[1988]: time="2026-01-24T00:39:01.945130097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:01.968873 containerd[1988]: time="2026-01-24T00:39:01.947923755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:39:01.969181 containerd[1988]: time="2026-01-24T00:39:01.948582044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:39:01.999297 kubelet[3194]: E0124 00:39:01.998781 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:39:02.000658 kubelet[3194]: E0124 00:39:01.999917 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:39:02.059715 kubelet[3194]: E0124 00:39:02.059629 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a61c7262178c49c787cf179bd2771f88,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:02.064305 containerd[1988]: time="2026-01-24T00:39:02.064039045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:39:02.361192 containerd[1988]: time="2026-01-24T00:39:02.361056685Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:02.362720 containerd[1988]: time="2026-01-24T00:39:02.362588883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:39:02.362855 containerd[1988]: time="2026-01-24T00:39:02.362626890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:39:02.363136 kubelet[3194]: E0124 00:39:02.363075 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:39:02.363468 kubelet[3194]: E0124 00:39:02.363137 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:39:02.363542 kubelet[3194]: E0124 00:39:02.363309 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:02.365025 kubelet[3194]: E0124 00:39:02.364907 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:39:02.569898 systemd-networkd[1900]: cali72282eb4efe: Gained IPv6LL Jan 24 00:39:02.628717 kubelet[3194]: E0124 00:39:02.627621 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:39:03.629511 kubelet[3194]: E0124 00:39:03.629465 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:39:04.004334 kubelet[3194]: I0124 00:39:04.003834 3194 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:39:04.119651 systemd[1]: run-containerd-runc-k8s.io-667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82-runc.IIM3Ve.mount: Deactivated successfully. Jan 24 00:39:04.258261 containerd[1988]: time="2026-01-24T00:39:04.258128547Z" level=info msg="StopPodSandbox for \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\"" Jan 24 00:39:04.282540 systemd[1]: run-containerd-runc-k8s.io-667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82-runc.ecde6I.mount: Deactivated successfully. Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.341 [INFO][4874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.343 [INFO][4874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" iface="eth0" netns="/var/run/netns/cni-6a1981f8-7461-7167-a5ab-465a2d62b244" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.344 [INFO][4874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" iface="eth0" netns="/var/run/netns/cni-6a1981f8-7461-7167-a5ab-465a2d62b244" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.345 [INFO][4874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" iface="eth0" netns="/var/run/netns/cni-6a1981f8-7461-7167-a5ab-465a2d62b244" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.345 [INFO][4874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.346 [INFO][4874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.387 [INFO][4892] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.388 [INFO][4892] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.388 [INFO][4892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.400 [WARNING][4892] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.400 [INFO][4892] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.402 [INFO][4892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:04.415743 containerd[1988]: 2026-01-24 00:39:04.409 [INFO][4874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:04.419257 containerd[1988]: time="2026-01-24T00:39:04.416481147Z" level=info msg="TearDown network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\" successfully" Jan 24 00:39:04.419257 containerd[1988]: time="2026-01-24T00:39:04.416521414Z" level=info msg="StopPodSandbox for \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\" returns successfully" Jan 24 00:39:04.419257 containerd[1988]: time="2026-01-24T00:39:04.418682333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28dmx,Uid:1e4ae984-32f1-4342-8042-eb57d3f9ba21,Namespace:kube-system,Attempt:1,}" Jan 24 00:39:04.424799 systemd[1]: run-netns-cni\x2d6a1981f8\x2d7461\x2d7167\x2da5ab\x2d465a2d62b244.mount: Deactivated successfully. Jan 24 00:39:04.665488 systemd-networkd[1900]: calic917fff5863: Link UP Jan 24 00:39:04.668095 systemd-networkd[1900]: calic917fff5863: Gained carrier Jan 24 00:39:04.671959 (udev-worker)[4937]: Network interface NamePolicy= disabled on kernel command line. 
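Meanwhile the whisker pod above is parked in ImagePullBackOff: the image tag does not exist in the registry, so kubelet retries the pull on an exponential schedule rather than hot-looping. A sketch of that schedule, using what are (as an assumption here) kubelet's default image-pull backoff parameters of 10s doubling to a 5m cap; the loop itself is purely illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule models the retry delays behind the ImagePullBackOff
// entries above: exponential growth from an initial delay up to a cap.
func backoffSchedule(initial, limit time.Duration, steps int) []time.Duration {
	out := make([]time.Duration, 0, steps)
	d := initial
	for i := 0; i < steps; i++ {
		out = append(out, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	for _, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
		fmt.Println("next pull attempt in", d)
	}
}
```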
Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.498 [INFO][4912] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.517 [INFO][4912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0 coredns-668d6bf9bc- kube-system 1e4ae984-32f1-4342-8042-eb57d3f9ba21 939 0 2026-01-24 00:38:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-37 coredns-668d6bf9bc-28dmx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic917fff5863 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.517 [INFO][4912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.583 [INFO][4930] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" HandleID="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.585 [INFO][4930] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" HandleID="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-37", "pod":"coredns-668d6bf9bc-28dmx", "timestamp":"2026-01-24 00:39:04.58315168 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.585 [INFO][4930] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.585 [INFO][4930] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.585 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.595 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.606 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.612 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.615 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.618 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.618 [INFO][4930] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.622 [INFO][4930] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510 Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.631 [INFO][4930] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.640 [INFO][4930] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.2/26] block=192.168.114.0/26 handle="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.643 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.2/26] handle="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" host="ip-172-31-23-37" Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.643 [INFO][4930] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:39:04.694216 containerd[1988]: 2026-01-24 00:39:04.643 [INFO][4930] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.2/26] IPv6=[] ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" HandleID="k8s-pod-network.87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.697581 containerd[1988]: 2026-01-24 00:39:04.654 [INFO][4912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e4ae984-32f1-4342-8042-eb57d3f9ba21", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"coredns-668d6bf9bc-28dmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic917fff5863", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:04.697581 containerd[1988]: 2026-01-24 00:39:04.654 [INFO][4912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.2/32] ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.697581 containerd[1988]: 2026-01-24 00:39:04.654 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic917fff5863 ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.697581 containerd[1988]: 2026-01-24 00:39:04.667 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" 
WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.697581 containerd[1988]: 2026-01-24 00:39:04.667 [INFO][4912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e4ae984-32f1-4342-8042-eb57d3f9ba21", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510", Pod:"coredns-668d6bf9bc-28dmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic917fff5863", MAC:"ae:72:5b:eb:a4:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:04.697581 containerd[1988]: 2026-01-24 00:39:04.685 [INFO][4912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510" Namespace="kube-system" Pod="coredns-668d6bf9bc-28dmx" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:04.730295 containerd[1988]: time="2026-01-24T00:39:04.730132413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:04.730822 containerd[1988]: time="2026-01-24T00:39:04.730319599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:04.730822 containerd[1988]: time="2026-01-24T00:39:04.730410031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:04.730822 containerd[1988]: time="2026-01-24T00:39:04.730577087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:04.758650 systemd[1]: Started cri-containerd-87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510.scope - libcontainer container 87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510. Jan 24 00:39:04.807684 containerd[1988]: time="2026-01-24T00:39:04.807638928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-28dmx,Uid:1e4ae984-32f1-4342-8042-eb57d3f9ba21,Namespace:kube-system,Attempt:1,} returns sandbox id \"87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510\"" Jan 24 00:39:04.811957 containerd[1988]: time="2026-01-24T00:39:04.811913712Z" level=info msg="CreateContainer within sandbox \"87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:39:05.022175 containerd[1988]: time="2026-01-24T00:39:05.018714076Z" level=info msg="CreateContainer within sandbox \"87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7386c2080d561ca08b21fbdbcbdc89ad6caf30ead0ae22b20877a95d2f2118a\"" Jan 24 00:39:05.022175 containerd[1988]: time="2026-01-24T00:39:05.019938561Z" level=info msg="StartContainer for \"b7386c2080d561ca08b21fbdbcbdc89ad6caf30ead0ae22b20877a95d2f2118a\"" Jan 24 00:39:05.053620 systemd[1]: Started cri-containerd-b7386c2080d561ca08b21fbdbcbdc89ad6caf30ead0ae22b20877a95d2f2118a.scope - libcontainer container b7386c2080d561ca08b21fbdbcbdc89ad6caf30ead0ae22b20877a95d2f2118a. Jan 24 00:39:05.138918 containerd[1988]: time="2026-01-24T00:39:05.138846870Z" level=info msg="StartContainer for \"b7386c2080d561ca08b21fbdbcbdc89ad6caf30ead0ae22b20877a95d2f2118a\" returns successfully" Jan 24 00:39:05.262934 containerd[1988]: time="2026-01-24T00:39:05.262589531Z" level=info msg="StopPodSandbox for \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\"" Jan 24 00:39:05.265409 containerd[1988]: time="2026-01-24T00:39:05.263504509Z" level=info msg="StopPodSandbox for \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\"" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.379 [INFO][5035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.380 [INFO][5035] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" iface="eth0" netns="/var/run/netns/cni-e3f1f6f8-d7c1-29e5-56ef-a1ab30c6358e" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.381 [INFO][5035] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" iface="eth0" netns="/var/run/netns/cni-e3f1f6f8-d7c1-29e5-56ef-a1ab30c6358e" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.382 [INFO][5035] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" iface="eth0" netns="/var/run/netns/cni-e3f1f6f8-d7c1-29e5-56ef-a1ab30c6358e" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.382 [INFO][5035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.382 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.447 [INFO][5049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.449 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.449 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.462 [WARNING][5049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.462 [INFO][5049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.467 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:05.473936 containerd[1988]: 2026-01-24 00:39:05.471 [INFO][5035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:05.475235 containerd[1988]: time="2026-01-24T00:39:05.474088874Z" level=info msg="TearDown network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\" successfully" Jan 24 00:39:05.475235 containerd[1988]: time="2026-01-24T00:39:05.474120813Z" level=info msg="StopPodSandbox for \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\" returns successfully" Jan 24 00:39:05.477432 containerd[1988]: time="2026-01-24T00:39:05.475670902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-tmnz4,Uid:559b3199-5162-436c-ae6f-2ec7000948df,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:39:05.483265 systemd[1]: run-netns-cni\x2de3f1f6f8\x2dd7c1\x2d29e5\x2d56ef\x2da1ab30c6358e.mount: Deactivated successfully. Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.390 [INFO][5036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.390 [INFO][5036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" iface="eth0" netns="/var/run/netns/cni-46d5c014-3ae5-c900-f2bf-0a7af84cc97c" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.390 [INFO][5036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" iface="eth0" netns="/var/run/netns/cni-46d5c014-3ae5-c900-f2bf-0a7af84cc97c" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.391 [INFO][5036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" iface="eth0" netns="/var/run/netns/cni-46d5c014-3ae5-c900-f2bf-0a7af84cc97c" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.391 [INFO][5036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.391 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.465 [INFO][5054] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.465 [INFO][5054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.467 [INFO][5054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.484 [WARNING][5054] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.484 [INFO][5054] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.487 [INFO][5054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:05.498546 containerd[1988]: 2026-01-24 00:39:05.492 [INFO][5036] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:05.498546 containerd[1988]: time="2026-01-24T00:39:05.498459043Z" level=info msg="TearDown network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\" successfully" Jan 24 00:39:05.498546 containerd[1988]: time="2026-01-24T00:39:05.498490626Z" level=info msg="StopPodSandbox for \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\" returns successfully" Jan 24 00:39:05.502576 containerd[1988]: time="2026-01-24T00:39:05.501078957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-phlb5,Uid:6c246b84-9265-4837-8997-3779f5365703,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:39:05.515031 systemd[1]: run-netns-cni\x2d46d5c014\x2d3ae5\x2dc900\x2df2bf\x2d0a7af84cc97c.mount: Deactivated successfully. Jan 24 00:39:05.747152 systemd-networkd[1900]: cali4933b834dcd: Link UP Jan 24 00:39:05.749165 systemd-networkd[1900]: cali4933b834dcd: Gained carrier Jan 24 00:39:05.767981 kubelet[3194]: I0124 00:39:05.767922 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-28dmx" podStartSLOduration=41.76789724 podStartE2EDuration="41.76789724s" podCreationTimestamp="2026-01-24 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:05.663749149 +0000 UTC m=+46.584795938" watchObservedRunningTime="2026-01-24 00:39:05.76789724 +0000 UTC m=+46.688944029" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.572 [INFO][5065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.599 [INFO][5065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0 calico-apiserver-5d8fb494d- calico-apiserver 559b3199-5162-436c-ae6f-2ec7000948df 951 0 2026-01-24 00:38:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d8fb494d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-37 calico-apiserver-5d8fb494d-tmnz4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4933b834dcd [] [] }} ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.601 [INFO][5065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.677 [INFO][5087] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" HandleID="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.773893 containerd[1988]: 
2026-01-24 00:39:05.679 [INFO][5087] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" HandleID="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000304780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-37", "pod":"calico-apiserver-5d8fb494d-tmnz4", "timestamp":"2026-01-24 00:39:05.677789727 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.679 [INFO][5087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.679 [INFO][5087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.679 [INFO][5087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.689 [INFO][5087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.698 [INFO][5087] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.705 [INFO][5087] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.709 [INFO][5087] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.714 [INFO][5087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.714 [INFO][5087] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.716 [INFO][5087] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.727 [INFO][5087] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.738 [INFO][5087] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.3/26] block=192.168.114.0/26 handle="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.738 [INFO][5087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.3/26] handle="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" host="ip-172-31-23-37" Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.738 [INFO][5087] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Jan 24 00:39:05.773893 containerd[1988]: 2026-01-24 00:39:05.738 [INFO][5087] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.3/26] IPv6=[] ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" HandleID="k8s-pod-network.34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.775684 containerd[1988]: 2026-01-24 00:39:05.742 [INFO][5065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"559b3199-5162-436c-ae6f-2ec7000948df", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"calico-apiserver-5d8fb494d-tmnz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4933b834dcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:05.775684 containerd[1988]: 2026-01-24 00:39:05.742 [INFO][5065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.3/32] ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.775684 containerd[1988]: 2026-01-24 00:39:05.742 [INFO][5065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4933b834dcd ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.775684 containerd[1988]: 2026-01-24 00:39:05.749 [INFO][5065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.775684 containerd[1988]: 2026-01-24 00:39:05.751 [INFO][5065] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"559b3199-5162-436c-ae6f-2ec7000948df", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab", Pod:"calico-apiserver-5d8fb494d-tmnz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4933b834dcd", MAC:"7e:0b:33:bd:e7:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:05.775684 containerd[1988]: 2026-01-24 00:39:05.771 [INFO][5065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-tmnz4" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:05.817208 containerd[1988]: time="2026-01-24T00:39:05.816577654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:05.817208 containerd[1988]: time="2026-01-24T00:39:05.816720020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:05.817208 containerd[1988]: time="2026-01-24T00:39:05.816745084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:05.821237 containerd[1988]: time="2026-01-24T00:39:05.817681256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:05.859632 systemd[1]: Started cri-containerd-34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab.scope - libcontainer container 34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab. 
Jan 24 00:39:05.882344 systemd-networkd[1900]: cali7c6e33f9202: Link UP Jan 24 00:39:05.885060 systemd-networkd[1900]: cali7c6e33f9202: Gained carrier Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.620 [INFO][5075] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.648 [INFO][5075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0 calico-apiserver-5d8fb494d- calico-apiserver 6c246b84-9265-4837-8997-3779f5365703 952 0 2026-01-24 00:38:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d8fb494d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-37 calico-apiserver-5d8fb494d-phlb5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c6e33f9202 [] [] }} ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.648 [INFO][5075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.726 [INFO][5097] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" HandleID="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.726 [INFO][5097] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" HandleID="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-37", "pod":"calico-apiserver-5d8fb494d-phlb5", "timestamp":"2026-01-24 00:39:05.726148904 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.728 [INFO][5097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.738 [INFO][5097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.738 [INFO][5097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.791 [INFO][5097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.804 [INFO][5097] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.818 [INFO][5097] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.824 [INFO][5097] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.827 [INFO][5097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.827 [INFO][5097] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.831 [INFO][5097] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.849 [INFO][5097] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.864 [INFO][5097] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.4/26] block=192.168.114.0/26 handle="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.864 [INFO][5097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.4/26] handle="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" host="ip-172-31-23-37" Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.864 [INFO][5097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:39:05.923493 containerd[1988]: 2026-01-24 00:39:05.864 [INFO][5097] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.4/26] IPv6=[] ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" HandleID="k8s-pod-network.e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.927599 containerd[1988]: 2026-01-24 00:39:05.873 [INFO][5075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c246b84-9265-4837-8997-3779f5365703", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"calico-apiserver-5d8fb494d-phlb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c6e33f9202", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:05.927599 containerd[1988]: 2026-01-24 00:39:05.874 [INFO][5075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.4/32] ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.927599 containerd[1988]: 2026-01-24 00:39:05.874 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c6e33f9202 ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.927599 containerd[1988]: 2026-01-24 00:39:05.887 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.927599 containerd[1988]: 2026-01-24 00:39:05.888 [INFO][5075] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c246b84-9265-4837-8997-3779f5365703", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b", Pod:"calico-apiserver-5d8fb494d-phlb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c6e33f9202", MAC:"9a:46:54:24:a0:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:05.927599 containerd[1988]: 2026-01-24 00:39:05.913 [INFO][5075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b" Namespace="calico-apiserver" Pod="calico-apiserver-5d8fb494d-phlb5" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:05.982453 containerd[1988]: time="2026-01-24T00:39:05.982244629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:05.982453 containerd[1988]: time="2026-01-24T00:39:05.982320444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:05.982453 containerd[1988]: time="2026-01-24T00:39:05.982339386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:05.983045 containerd[1988]: time="2026-01-24T00:39:05.982862891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:06.025703 systemd[1]: Started cri-containerd-e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b.scope - libcontainer container e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b. 
Jan 24 00:39:06.154554 containerd[1988]: time="2026-01-24T00:39:06.154199836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-tmnz4,Uid:559b3199-5162-436c-ae6f-2ec7000948df,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab\"" Jan 24 00:39:06.158250 containerd[1988]: time="2026-01-24T00:39:06.157975125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:39:06.230297 kubelet[3194]: I0124 00:39:06.229759 3194 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:39:06.238092 containerd[1988]: time="2026-01-24T00:39:06.237553225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d8fb494d-phlb5,Uid:6c246b84-9265-4837-8997-3779f5365703,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b\"" Jan 24 00:39:06.257312 containerd[1988]: time="2026-01-24T00:39:06.256598550Z" level=info msg="StopPodSandbox for \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\"" Jan 24 00:39:06.257736 containerd[1988]: time="2026-01-24T00:39:06.257704206Z" level=info msg="StopPodSandbox for \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\"" Jan 24 00:39:06.407362 containerd[1988]: time="2026-01-24T00:39:06.406642866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:06.410572 containerd[1988]: time="2026-01-24T00:39:06.410519778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:39:06.411398 containerd[1988]: time="2026-01-24T00:39:06.410639673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:06.411519 kubelet[3194]: E0124 00:39:06.410844 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:06.411519 kubelet[3194]: E0124 00:39:06.410901 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:06.411519 kubelet[3194]: E0124 00:39:06.411193 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfz58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-tmnz4_calico-apiserver(559b3199-5162-436c-ae6f-2ec7000948df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:06.413219 containerd[1988]: time="2026-01-24T00:39:06.412735422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:39:06.413535 kubelet[3194]: E0124 00:39:06.413397 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.355 [INFO][5238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.355 [INFO][5238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" iface="eth0" netns="/var/run/netns/cni-df116684-5c16-bc3b-f780-2ae482417522" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.356 [INFO][5238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" iface="eth0" netns="/var/run/netns/cni-df116684-5c16-bc3b-f780-2ae482417522" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.357 [INFO][5238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" iface="eth0" netns="/var/run/netns/cni-df116684-5c16-bc3b-f780-2ae482417522" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.357 [INFO][5238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.357 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.424 [INFO][5252] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.424 [INFO][5252] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.424 [INFO][5252] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.437 [WARNING][5252] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.437 [INFO][5252] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.444 [INFO][5252] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:06.462031 containerd[1988]: 2026-01-24 00:39:06.454 [INFO][5238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:06.469790 containerd[1988]: time="2026-01-24T00:39:06.462685090Z" level=info msg="TearDown network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\" successfully" Jan 24 00:39:06.469790 containerd[1988]: time="2026-01-24T00:39:06.462718380Z" level=info msg="StopPodSandbox for \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\" returns successfully" Jan 24 00:39:06.466193 systemd[1]: run-netns-cni\x2ddf116684\x2d5c16\x2dbc3b\x2df780\x2d2ae482417522.mount: Deactivated successfully. 
Jan 24 00:39:06.470911 containerd[1988]: time="2026-01-24T00:39:06.470869003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qnfl2,Uid:e1f50d23-3a90-4692-90b0-6d62e0594e46,Namespace:calico-system,Attempt:1,}" Jan 24 00:39:06.474779 systemd-networkd[1900]: calic917fff5863: Gained IPv6LL Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.362 [INFO][5234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.363 [INFO][5234] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" iface="eth0" netns="/var/run/netns/cni-932ba538-268b-1bf0-023e-8ae5fb996902" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.364 [INFO][5234] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" iface="eth0" netns="/var/run/netns/cni-932ba538-268b-1bf0-023e-8ae5fb996902" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.365 [INFO][5234] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" iface="eth0" netns="/var/run/netns/cni-932ba538-268b-1bf0-023e-8ae5fb996902" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.365 [INFO][5234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.365 [INFO][5234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.445 [INFO][5254] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.445 [INFO][5254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.445 [INFO][5254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.469 [WARNING][5254] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.471 [INFO][5254] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.481 [INFO][5254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:39:06.496179 containerd[1988]: 2026-01-24 00:39:06.487 [INFO][5234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:06.499614 containerd[1988]: time="2026-01-24T00:39:06.499574097Z" level=info msg="TearDown network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\" successfully" Jan 24 00:39:06.499614 containerd[1988]: time="2026-01-24T00:39:06.499614624Z" level=info msg="StopPodSandbox for \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\" returns successfully" Jan 24 00:39:06.502097 containerd[1988]: time="2026-01-24T00:39:06.502038483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5f78b9cf-nf2hx,Uid:92126a9f-72bf-4007-b274-6c7bfe78315a,Namespace:calico-system,Attempt:1,}" Jan 24 00:39:06.511618 systemd[1]: run-netns-cni\x2d932ba538\x2d268b\x2d1bf0\x2d023e\x2d8ae5fb996902.mount: Deactivated successfully. Jan 24 00:39:06.708587 kubelet[3194]: E0124 00:39:06.706775 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:39:06.769035 containerd[1988]: time="2026-01-24T00:39:06.768987282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:06.771304 containerd[1988]: time="2026-01-24T00:39:06.770940891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:39:06.771304 containerd[1988]: time="2026-01-24T00:39:06.771031970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:06.772569 kubelet[3194]: E0124 00:39:06.772252 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:06.772569 kubelet[3194]: E0124 00:39:06.772310 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:06.826309 kubelet[3194]: E0124 00:39:06.772486 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68sfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-phlb5_calico-apiserver(6c246b84-9265-4837-8997-3779f5365703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:06.828903 kubelet[3194]: E0124 00:39:06.827588 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:39:06.927931 systemd-networkd[1900]: cali321b7c331af: Link UP Jan 24 00:39:06.930474 systemd-networkd[1900]: cali321b7c331af: Gained carrier Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.619 [INFO][5269] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.664 [INFO][5269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0 goldmane-666569f655- calico-system e1f50d23-3a90-4692-90b0-6d62e0594e46 968 0 2026-01-24 00:38:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-37 goldmane-666569f655-qnfl2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali321b7c331af [] [] }} ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.664 [INFO][5269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.748 [INFO][5294] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" HandleID="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.749 [INFO][5294] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" HandleID="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038b750), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-37", "pod":"goldmane-666569f655-qnfl2", "timestamp":"2026-01-24 00:39:06.748777451 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.749 [INFO][5294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.749 [INFO][5294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.749 [INFO][5294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.775 [INFO][5294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.822 [INFO][5294] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.850 [INFO][5294] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.857 [INFO][5294] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.867 [INFO][5294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.867 [INFO][5294] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.869 [INFO][5294] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82 Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.886 [INFO][5294] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.906 [INFO][5294] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.5/26] block=192.168.114.0/26 handle="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.906 [INFO][5294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.5/26] handle="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" host="ip-172-31-23-37" Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.906 [INFO][5294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
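
The sequence just logged is the whole allocation walk: look up this host's block affinities, confirm the affinity for 192.168.114.0/26, load that block, claim the next free address, and write the block back to the datastore so the claim is durable ("Writing block in order to claim IPs"). A toy model of the claim step, assuming (as the later claims of .6, .7 and .8 suggest) that the first four host addresses of the /26 are already in use, so the next claim yields 192.168.114.5:

    import ipaddress

    block = ipaddress.ip_network("192.168.114.0/26")  # 64 addresses, .0 through .63
    allocated = {ipaddress.ip_address(f"192.168.114.{i}") for i in range(1, 5)}  # assumed in use

    def claim_next(block, allocated):
        # Scan host addresses in order and claim the first free one,
        # like ipam.go 1219 "Attempting to assign 1 addresses from block".
        for addr in block.hosts():
            if addr not in allocated:
                allocated.add(addr)  # ipam.go 1246: write the block to claim the IP
                return addr
        raise RuntimeError("block exhausted")

    print(claim_next(block, allocated))  # 192.168.114.5, matching ipam.go 1262
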
Jan 24 00:39:06.976225 containerd[1988]: 2026-01-24 00:39:06.906 [INFO][5294] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.5/26] IPv6=[] ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" HandleID="k8s-pod-network.41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.978821 containerd[1988]: 2026-01-24 00:39:06.916 [INFO][5269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e1f50d23-3a90-4692-90b0-6d62e0594e46", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"goldmane-666569f655-qnfl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali321b7c331af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:06.978821 containerd[1988]: 2026-01-24 00:39:06.916 [INFO][5269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.5/32] ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.978821 containerd[1988]: 2026-01-24 00:39:06.916 [INFO][5269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali321b7c331af ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.978821 containerd[1988]: 2026-01-24 00:39:06.929 [INFO][5269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:06.978821 containerd[1988]: 2026-01-24 00:39:06.929 [INFO][5269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" 
WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e1f50d23-3a90-4692-90b0-6d62e0594e46", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82", Pod:"goldmane-666569f655-qnfl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali321b7c331af", MAC:"3a:cb:6a:89:f5:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:06.978821 containerd[1988]: 2026-01-24 00:39:06.972 [INFO][5269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82" Namespace="calico-system" Pod="goldmane-666569f655-qnfl2" WorkloadEndpoint="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:07.023345 containerd[1988]: time="2026-01-24T00:39:07.023186496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:07.023531 containerd[1988]: time="2026-01-24T00:39:07.023403535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:07.023531 containerd[1988]: time="2026-01-24T00:39:07.023491897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:07.023699 containerd[1988]: time="2026-01-24T00:39:07.023650376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:07.060646 systemd[1]: Started cri-containerd-41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82.scope - libcontainer container 41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82. 
Jan 24 00:39:07.120367 systemd-networkd[1900]: cali53f58e4a5cb: Link UP Jan 24 00:39:07.123841 systemd-networkd[1900]: cali53f58e4a5cb: Gained carrier Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.635 [INFO][5281] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.668 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0 calico-kube-controllers-6c5f78b9cf- calico-system 92126a9f-72bf-4007-b274-6c7bfe78315a 969 0 2026-01-24 00:38:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c5f78b9cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-37 calico-kube-controllers-6c5f78b9cf-nf2hx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali53f58e4a5cb [] [] }} ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.668 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.798 [INFO][5296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" HandleID="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.799 [INFO][5296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" HandleID="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035c4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-37", "pod":"calico-kube-controllers-6c5f78b9cf-nf2hx", "timestamp":"2026-01-24 00:39:06.79893155 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.799 [INFO][5296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.906 [INFO][5296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
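
Both CNI ADDs in this capture open with "File /var/lib/calico/mtu does not exist"; that entry is informational. The plugin falls back to its default interface MTU when the node-local override file is absent, a plain read-with-fallback pattern (the default value below is illustrative, not Calico's computed MTU):

    from pathlib import Path

    def interface_mtu(override=Path("/var/lib/calico/mtu"), default=1500):
        # Use the node-local override when present; otherwise fall back.
        try:
            return int(override.read_text().strip())
        except FileNotFoundError:
            return default  # "File /var/lib/calico/mtu does not exist"

    print(interface_mtu())
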
Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.907 [INFO][5296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.970 [INFO][5296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.987 [INFO][5296] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:06.996 [INFO][5296] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.003 [INFO][5296] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.011 [INFO][5296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.012 [INFO][5296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.018 [INFO][5296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47 Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.033 [INFO][5296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.058 [INFO][5296] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.6/26] block=192.168.114.0/26 handle="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.058 [INFO][5296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.6/26] handle="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" host="ip-172-31-23-37" Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.058 [INFO][5296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
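
Meanwhile the image pulls keep failing: the calico-apiserver container above already hit ErrImagePull, and the "Back-off pulling image" entries below show the kubelet moving those pods into ImagePullBackOff. Per Kubernetes' documented defaults the retry delay starts small and doubles up to a five-minute cap, which is why a missing tag like v3.30.4 produces a slow drumbeat of identical errors rather than a tight loop. A sketch of that schedule (the 10 s base and 300 s cap are the upstream defaults, taken from the Kubernetes docs rather than from this node):

    def backoff_delays(base=10, cap=300, attempts=8):
        # Doubling backoff with a cap, as the kubelet applies to image pulls.
        delay = base
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_delays()))  # [10, 20, 40, 80, 160, 300, 300, 300]
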
Jan 24 00:39:07.166329 containerd[1988]: 2026-01-24 00:39:07.060 [INFO][5296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.6/26] IPv6=[] ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" HandleID="k8s-pod-network.532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:07.170907 containerd[1988]: 2026-01-24 00:39:07.072 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0", GenerateName:"calico-kube-controllers-6c5f78b9cf-", Namespace:"calico-system", SelfLink:"", UID:"92126a9f-72bf-4007-b274-6c7bfe78315a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5f78b9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"calico-kube-controllers-6c5f78b9cf-nf2hx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53f58e4a5cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:07.170907 containerd[1988]: 2026-01-24 00:39:07.072 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.6/32] ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:07.170907 containerd[1988]: 2026-01-24 00:39:07.072 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53f58e4a5cb ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:07.170907 containerd[1988]: 2026-01-24 00:39:07.127 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:07.170907 containerd[1988]: 2026-01-24 
00:39:07.128 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0", GenerateName:"calico-kube-controllers-6c5f78b9cf-", Namespace:"calico-system", SelfLink:"", UID:"92126a9f-72bf-4007-b274-6c7bfe78315a", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5f78b9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47", Pod:"calico-kube-controllers-6c5f78b9cf-nf2hx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53f58e4a5cb", MAC:"92:27:58:80:4e:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:07.170907 containerd[1988]: 2026-01-24 00:39:07.159 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47" Namespace="calico-system" Pod="calico-kube-controllers-6c5f78b9cf-nf2hx" WorkloadEndpoint="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:07.198309 containerd[1988]: time="2026-01-24T00:39:07.198013901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qnfl2,Uid:e1f50d23-3a90-4692-90b0-6d62e0594e46,Namespace:calico-system,Attempt:1,} returns sandbox id \"41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82\"" Jan 24 00:39:07.207814 containerd[1988]: time="2026-01-24T00:39:07.207701769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:39:07.223079 containerd[1988]: time="2026-01-24T00:39:07.222631342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:07.224225 containerd[1988]: time="2026-01-24T00:39:07.223790365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:07.224225 containerd[1988]: time="2026-01-24T00:39:07.223820381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:07.224225 containerd[1988]: time="2026-01-24T00:39:07.223933653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:07.242547 systemd-networkd[1900]: cali4933b834dcd: Gained IPv6LL Jan 24 00:39:07.266995 systemd[1]: Started sshd@7-172.31.23.37:22-4.153.228.146:35234.service - OpenSSH per-connection server daemon (4.153.228.146:35234). Jan 24 00:39:07.269886 containerd[1988]: time="2026-01-24T00:39:07.268746143Z" level=info msg="StopPodSandbox for \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\"" Jan 24 00:39:07.277728 containerd[1988]: time="2026-01-24T00:39:07.277675172Z" level=info msg="StopPodSandbox for \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\"" Jan 24 00:39:07.384081 systemd[1]: run-containerd-runc-k8s.io-532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47-runc.kszQx2.mount: Deactivated successfully. Jan 24 00:39:07.401664 systemd[1]: Started cri-containerd-532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47.scope - libcontainer container 532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47. Jan 24 00:39:07.558602 containerd[1988]: time="2026-01-24T00:39:07.558512357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:07.561568 containerd[1988]: time="2026-01-24T00:39:07.560942559Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:39:07.567970 containerd[1988]: time="2026-01-24T00:39:07.561003207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:07.568500 kubelet[3194]: E0124 00:39:07.567968 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:39:07.568500 kubelet[3194]: E0124 00:39:07.568024 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:39:07.568500 kubelet[3194]: E0124 00:39:07.568221 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j48zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qnfl2_calico-system(e1f50d23-3a90-4692-90b0-6d62e0594e46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:07.570004 kubelet[3194]: E0124 00:39:07.569859 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:39:07.626592 systemd-networkd[1900]: 
cali7c6e33f9202: Gained IPv6LL Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.556 [INFO][5407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.557 [INFO][5407] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" iface="eth0" netns="/var/run/netns/cni-7ab4e47f-fc1a-5755-fee5-8f074c69d1ba" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.558 [INFO][5407] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" iface="eth0" netns="/var/run/netns/cni-7ab4e47f-fc1a-5755-fee5-8f074c69d1ba" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.559 [INFO][5407] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" iface="eth0" netns="/var/run/netns/cni-7ab4e47f-fc1a-5755-fee5-8f074c69d1ba" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.559 [INFO][5407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.559 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.629 [INFO][5448] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.630 [INFO][5448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.630 [INFO][5448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.645 [WARNING][5448] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.646 [INFO][5448] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.653 [INFO][5448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:07.667787 containerd[1988]: 2026-01-24 00:39:07.662 [INFO][5407] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:07.678824 containerd[1988]: time="2026-01-24T00:39:07.678543457Z" level=info msg="TearDown network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\" successfully" Jan 24 00:39:07.678824 containerd[1988]: time="2026-01-24T00:39:07.678594688Z" level=info msg="StopPodSandbox for \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\" returns successfully" Jan 24 00:39:07.684670 containerd[1988]: time="2026-01-24T00:39:07.681309019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8z2m,Uid:08028277-ca96-466b-b85d-b33e87d62943,Namespace:calico-system,Attempt:1,}" Jan 24 00:39:07.681771 systemd[1]: run-netns-cni\x2d7ab4e47f\x2dfc1a\x2d5755\x2dfee5\x2d8f074c69d1ba.mount: Deactivated successfully. Jan 24 00:39:07.692886 containerd[1988]: time="2026-01-24T00:39:07.692692485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c5f78b9cf-nf2hx,Uid:92126a9f-72bf-4007-b274-6c7bfe78315a,Namespace:calico-system,Attempt:1,} returns sandbox id \"532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47\"" Jan 24 00:39:07.701360 containerd[1988]: time="2026-01-24T00:39:07.701299772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:39:07.740961 kubelet[3194]: E0124 00:39:07.740624 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:39:07.740961 kubelet[3194]: E0124 00:39:07.740777 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:39:07.740961 kubelet[3194]: E0124 00:39:07.740859 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.548 [INFO][5416] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.549 [INFO][5416] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" iface="eth0" netns="/var/run/netns/cni-2bece218-3303-e716-fbcc-33d09ebc7e57" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.549 [INFO][5416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" iface="eth0" netns="/var/run/netns/cni-2bece218-3303-e716-fbcc-33d09ebc7e57" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.550 [INFO][5416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" iface="eth0" netns="/var/run/netns/cni-2bece218-3303-e716-fbcc-33d09ebc7e57" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.550 [INFO][5416] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.550 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.693 [INFO][5446] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.699 [INFO][5446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.699 [INFO][5446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.780 [WARNING][5446] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.781 [INFO][5446] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.806 [INFO][5446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:07.816482 containerd[1988]: 2026-01-24 00:39:07.811 [INFO][5416] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:07.824513 containerd[1988]: time="2026-01-24T00:39:07.824446004Z" level=info msg="TearDown network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\" successfully" Jan 24 00:39:07.824713 containerd[1988]: time="2026-01-24T00:39:07.824689901Z" level=info msg="StopPodSandbox for \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\" returns successfully" Jan 24 00:39:07.828091 containerd[1988]: time="2026-01-24T00:39:07.827847069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7m4v,Uid:29f29deb-ec14-4cf7-a095-b62aa4c4a912,Namespace:kube-system,Attempt:1,}" Jan 24 00:39:07.896269 sshd[5389]: Accepted publickey for core from 4.153.228.146 port 35234 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:07.906919 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:07.925119 systemd-logind[1968]: New session 8 of user core. Jan 24 00:39:07.931641 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:39:08.032611 containerd[1988]: time="2026-01-24T00:39:08.030534835Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:08.034326 containerd[1988]: time="2026-01-24T00:39:08.034270903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:39:08.034639 containerd[1988]: time="2026-01-24T00:39:08.034413423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:39:08.035146 kubelet[3194]: E0124 00:39:08.035100 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:39:08.035839 kubelet[3194]: E0124 00:39:08.035760 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:39:08.036440 kubelet[3194]: E0124 00:39:08.036338 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c56f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c5f78b9cf-nf2hx_calico-system(92126a9f-72bf-4007-b274-6c7bfe78315a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:08.038804 kubelet[3194]: E0124 00:39:08.038765 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a" Jan 24 00:39:08.074365 systemd-networkd[1900]: cali321b7c331af: Gained IPv6LL Jan 24 00:39:08.126403 
systemd[1]: run-netns-cni\x2d2bece218\x2d3303\x2de716\x2dfbcc\x2d33d09ebc7e57.mount: Deactivated successfully. Jan 24 00:39:08.178845 systemd-networkd[1900]: calib71042c61e7: Link UP Jan 24 00:39:08.180560 systemd-networkd[1900]: calib71042c61e7: Gained carrier Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:07.978 [INFO][5484] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.024 [INFO][5484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0 coredns-668d6bf9bc- kube-system 29f29deb-ec14-4cf7-a095-b62aa4c4a912 1029 0 2026-01-24 00:38:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-37 coredns-668d6bf9bc-h7m4v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib71042c61e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.024 [INFO][5484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.086 [INFO][5509] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" HandleID="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.087 [INFO][5509] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" HandleID="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-37", "pod":"coredns-668d6bf9bc-h7m4v", "timestamp":"2026-01-24 00:39:08.086153608 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.087 [INFO][5509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.087 [INFO][5509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
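
All three failing pulls report the same shape of error: containerd resolves the reference against ghcr.io, receives http.StatusNotFound for the manifest, and surfaces "not found". That can be confirmed off-node by asking the registry for the manifest directly. A sketch using the standard OCI distribution flow (anonymous token, then a manifest request); the assumption that ghcr.io issues anonymous pull tokens for public repositories is mine, not something stated in the log:

    import json
    import urllib.error
    import urllib.request

    REPO = "flatcar/calico/kube-controllers"  # one of the failing images above
    TAG = "v3.30.4"

    def manifest_exists(repo, tag):
        # 1) anonymous pull token (standard OCI distribution token flow; assumed for ghcr.io)
        tok_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
        token = json.load(urllib.request.urlopen(tok_url))["token"]
        # 2) ask for the manifest; a 404 here is exactly containerd's "not found"
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/vnd.oci.image.index.v1+json"})
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as e:
            if e.code == 404:
                return False
            raise

    print(manifest_exists(REPO, TAG))
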
Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.087 [INFO][5509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.099 [INFO][5509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.114 [INFO][5509] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.135 [INFO][5509] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.140 [INFO][5509] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.143 [INFO][5509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.144 [INFO][5509] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.147 [INFO][5509] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.156 [INFO][5509] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.170 [INFO][5509] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.7/26] block=192.168.114.0/26 handle="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.170 [INFO][5509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.7/26] handle="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" host="ip-172-31-23-37" Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.170 [INFO][5509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
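
coredns draws 192.168.114.7 next, and its endpoint dump just below is the first in this capture with named ports. Note that the WorkloadEndpointPort values are printed as Go hex literals: Port:0x35 is 53 (dns and dns-tcp) and Port:0x23c1 is 9153, CoreDNS's standard metrics port:

    for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)]:
        print(name, port)  # dns 53, dns-tcp 53, metrics 9153
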
Jan 24 00:39:08.231520 containerd[1988]: 2026-01-24 00:39:08.170 [INFO][5509] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.7/26] IPv6=[] ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" HandleID="k8s-pod-network.8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:08.235411 containerd[1988]: 2026-01-24 00:39:08.173 [INFO][5484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f29deb-ec14-4cf7-a095-b62aa4c4a912", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"coredns-668d6bf9bc-h7m4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib71042c61e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:08.235411 containerd[1988]: 2026-01-24 00:39:08.174 [INFO][5484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.7/32] ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:08.235411 containerd[1988]: 2026-01-24 00:39:08.174 [INFO][5484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib71042c61e7 ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:08.235411 containerd[1988]: 2026-01-24 00:39:08.182 [INFO][5484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" 
WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:08.235411 containerd[1988]: 2026-01-24 00:39:08.183 [INFO][5484] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f29deb-ec14-4cf7-a095-b62aa4c4a912", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb", Pod:"coredns-668d6bf9bc-h7m4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib71042c61e7", MAC:"ee:72:60:79:43:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:08.235411 containerd[1988]: 2026-01-24 00:39:08.227 [INFO][5484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-h7m4v" WorkloadEndpoint="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:08.295959 containerd[1988]: time="2026-01-24T00:39:08.292794087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:08.295959 containerd[1988]: time="2026-01-24T00:39:08.294788975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:08.296983 containerd[1988]: time="2026-01-24T00:39:08.296158328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:08.297432 containerd[1988]: time="2026-01-24T00:39:08.297231966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:08.363520 systemd[1]: Started cri-containerd-8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb.scope - libcontainer container 8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb. Jan 24 00:39:08.402794 systemd-networkd[1900]: calic80f90f8826: Link UP Jan 24 00:39:08.405044 systemd-networkd[1900]: calic80f90f8826: Gained carrier Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:07.904 [INFO][5475] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:07.962 [INFO][5475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0 csi-node-driver- calico-system 08028277-ca96-466b-b85d-b33e87d62943 1030 0 2026-01-24 00:38:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-37 csi-node-driver-g8z2m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic80f90f8826 [] [] }} ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:07.962 [INFO][5475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.110 [INFO][5503] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" HandleID="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.113 [INFO][5503] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" HandleID="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000347970), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-37", "pod":"csi-node-driver-g8z2m", "timestamp":"2026-01-24 00:39:08.110620292 +0000 UTC"}, Hostname:"ip-172-31-23-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.114 [INFO][5503] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.170 [INFO][5503] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.171 [INFO][5503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-37' Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.215 [INFO][5503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.241 [INFO][5503] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.267 [INFO][5503] ipam/ipam.go 511: Trying affinity for 192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.271 [INFO][5503] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.281 [INFO][5503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.282 [INFO][5503] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.286 [INFO][5503] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08 Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.316 [INFO][5503] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.386 [INFO][5503] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.114.8/26] block=192.168.114.0/26 handle="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.386 [INFO][5503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.8/26] handle="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" host="ip-172-31-23-37" Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.386 [INFO][5503] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:39:08.509241 containerd[1988]: 2026-01-24 00:39:08.386 [INFO][5503] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.114.8/26] IPv6=[] ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" HandleID="k8s-pod-network.313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:08.516011 containerd[1988]: 2026-01-24 00:39:08.391 [INFO][5475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08028277-ca96-466b-b85d-b33e87d62943", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"", Pod:"csi-node-driver-g8z2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic80f90f8826", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:08.516011 containerd[1988]: 2026-01-24 00:39:08.392 [INFO][5475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.8/32] ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:08.516011 containerd[1988]: 2026-01-24 00:39:08.392 [INFO][5475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic80f90f8826 ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:08.516011 containerd[1988]: 2026-01-24 00:39:08.408 [INFO][5475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:08.516011 containerd[1988]: 2026-01-24 00:39:08.412 [INFO][5475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" 
Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08028277-ca96-466b-b85d-b33e87d62943", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08", Pod:"csi-node-driver-g8z2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic80f90f8826", MAC:"d6:4c:45:22:a5:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:08.516011 containerd[1988]: 2026-01-24 00:39:08.502 [INFO][5475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08" Namespace="calico-system" Pod="csi-node-driver-g8z2m" WorkloadEndpoint="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:08.558128 containerd[1988]: time="2026-01-24T00:39:08.557784055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:39:08.558128 containerd[1988]: time="2026-01-24T00:39:08.557875246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:39:08.558128 containerd[1988]: time="2026-01-24T00:39:08.557899556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:08.558128 containerd[1988]: time="2026-01-24T00:39:08.558016746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:39:08.621648 systemd[1]: Started cri-containerd-313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08.scope - libcontainer container 313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08. 
Jan 24 00:39:08.681941 containerd[1988]: time="2026-01-24T00:39:08.681893565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h7m4v,Uid:29f29deb-ec14-4cf7-a095-b62aa4c4a912,Namespace:kube-system,Attempt:1,} returns sandbox id \"8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb\"" Jan 24 00:39:08.690211 containerd[1988]: time="2026-01-24T00:39:08.689783103Z" level=info msg="CreateContainer within sandbox \"8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:39:08.714003 systemd-networkd[1900]: cali53f58e4a5cb: Gained IPv6LL Jan 24 00:39:08.718796 containerd[1988]: time="2026-01-24T00:39:08.717634768Z" level=info msg="CreateContainer within sandbox \"8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b334255462b8cb5955436d64662819ac67b5df9c9bfb96a24c2d7b8edd5a303f\"" Jan 24 00:39:08.720521 containerd[1988]: time="2026-01-24T00:39:08.719034319Z" level=info msg="StartContainer for \"b334255462b8cb5955436d64662819ac67b5df9c9bfb96a24c2d7b8edd5a303f\"" Jan 24 00:39:08.754917 kubelet[3194]: E0124 00:39:08.754871 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a" Jan 24 00:39:08.759970 kubelet[3194]: E0124 00:39:08.759916 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:39:08.792134 systemd[1]: Started cri-containerd-b334255462b8cb5955436d64662819ac67b5df9c9bfb96a24c2d7b8edd5a303f.scope - libcontainer container b334255462b8cb5955436d64662819ac67b5df9c9bfb96a24c2d7b8edd5a303f. 
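The RunPodSandbox/CreateContainer/StartContainer handoff is visible end to end here: containerd returns a 64-hex-character ID, and systemd hosts the process tree as a transient scope named after that ID. The mapping is mechanical, as a sketch shows (the "cri-containerd-" prefix is taken from the Started lines above; it applies when containerd's CRI plugin delegates cgroup management to systemd, which is configuration-dependent):

package main

import "fmt"

// scopeUnit rebuilds the transient unit name visible in the systemd lines above.
func scopeUnit(id string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", id)
}

func main() {
	fmt.Println(scopeUnit("b334255462b8cb5955436d64662819ac67b5df9c9bfb96a24c2d7b8edd5a303f"))
}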
Jan 24 00:39:08.941035 containerd[1988]: time="2026-01-24T00:39:08.940982186Z" level=info msg="StartContainer for \"b334255462b8cb5955436d64662819ac67b5df9c9bfb96a24c2d7b8edd5a303f\" returns successfully" Jan 24 00:39:09.040733 containerd[1988]: time="2026-01-24T00:39:09.040592245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g8z2m,Uid:08028277-ca96-466b-b85d-b33e87d62943,Namespace:calico-system,Attempt:1,} returns sandbox id \"313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08\"" Jan 24 00:39:09.047018 containerd[1988]: time="2026-01-24T00:39:09.046557284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:39:09.310981 sshd[5389]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:09.320348 systemd[1]: sshd@7-172.31.23.37:22-4.153.228.146:35234.service: Deactivated successfully. Jan 24 00:39:09.326844 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:39:09.333297 systemd-logind[1968]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:39:09.335671 systemd-logind[1968]: Removed session 8. Jan 24 00:39:09.355204 systemd-networkd[1900]: calib71042c61e7: Gained IPv6LL Jan 24 00:39:09.439897 containerd[1988]: time="2026-01-24T00:39:09.439847558Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:09.441612 containerd[1988]: time="2026-01-24T00:39:09.441517786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:39:09.441754 containerd[1988]: time="2026-01-24T00:39:09.441582372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:39:09.443398 kubelet[3194]: E0124 00:39:09.441980 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:39:09.443398 kubelet[3194]: E0124 00:39:09.442043 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:39:09.443398 kubelet[3194]: E0124 00:39:09.442839 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68lqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:09.445144 containerd[1988]: time="2026-01-24T00:39:09.445110036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:39:09.569410 kernel: bpftool[5699]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:39:09.741780 containerd[1988]: time="2026-01-24T00:39:09.741713842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:09.743341 containerd[1988]: time="2026-01-24T00:39:09.743286111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:39:09.743481 containerd[1988]: time="2026-01-24T00:39:09.743321347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:39:09.743623 kubelet[3194]: E0124 00:39:09.743579 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:39:09.743691 kubelet[3194]: E0124 00:39:09.743637 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:39:09.743837 kubelet[3194]: E0124 00:39:09.743786 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68lqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:09.745308 kubelet[3194]: E0124 00:39:09.745261 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:39:09.762307 kubelet[3194]: E0124 00:39:09.762216 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:39:09.944567 systemd-networkd[1900]: vxlan.calico: Link UP Jan 24 00:39:09.944577 systemd-networkd[1900]: vxlan.calico: Gained carrier Jan 24 00:39:09.994173 (udev-worker)[4939]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:39:10.057589 systemd-networkd[1900]: calic80f90f8826: Gained IPv6LL Jan 24 00:39:10.767163 kubelet[3194]: E0124 00:39:10.767106 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:39:10.805063 kubelet[3194]: I0124 00:39:10.804238 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h7m4v" podStartSLOduration=46.789936772 podStartE2EDuration="46.789936772s" podCreationTimestamp="2026-01-24 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:39:09.800160298 +0000 UTC m=+50.721207085" watchObservedRunningTime="2026-01-24 00:39:10.789936772 +0000 UTC m=+51.710983560" Jan 24 00:39:11.723865 systemd-networkd[1900]: vxlan.calico: Gained IPv6LL Jan 24 00:39:14.257556 containerd[1988]: time="2026-01-24T00:39:14.257162109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:39:14.385796 ntpd[1963]: Listen normally on 7 vxlan.calico 
192.168.114.0:123 Jan 24 00:39:14.385891 ntpd[1963]: Listen normally on 8 cali72282eb4efe [fe80::ecee:eeff:feee:eeee%4]:123 Jan 24 00:39:14.385938 ntpd[1963]: Listen normally on 9 calic917fff5863 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 24 00:39:14.385969 ntpd[1963]: Listen normally on 10 cali4933b834dcd [fe80::ecee:eeff:feee:eeee%6]:123 Jan 24 00:39:14.386000 ntpd[1963]: Listen normally on 11 cali7c6e33f9202 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:39:14.386028 ntpd[1963]: Listen normally on 12 cali321b7c331af [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:39:14.386057 ntpd[1963]: Listen normally on 13 cali53f58e4a5cb [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:39:14.386101 ntpd[1963]: Listen normally on 14 calib71042c61e7 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 24 00:39:14.386130 ntpd[1963]: Listen normally on 15 calic80f90f8826 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 24 00:39:14.386162 ntpd[1963]: Listen normally on 16 vxlan.calico [fe80::6454:ff:fe9c:8e79%12]:123 Jan 24 00:39:14.393629 systemd[1]: Started sshd@8-172.31.23.37:22-4.153.228.146:35250.service - OpenSSH per-connection server daemon (4.153.228.146:35250).
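Every cali* interface above reports the same link-local address, fe80::ecee:eeff:feee:eeee. That follows from Calico assigning each host-side veth the fixed MAC ee:ee:ee:ee:ee:ee and the kernel deriving the IPv6 link-local by EUI-64 (flip the universal/local bit, insert ff:fe in the middle); the vxlan.calico address implies its MAC the same way. A sketch of the derivation:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalFromMAC derives the EUI-64 IPv6 link-local address the kernel
// assigns for a given MAC, matching the fe80:: addresses ntpd binds above.
func linkLocalFromMAC(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80
	b[8] = mac[0] ^ 0x02 // flip the universal/local bit
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe // EUI-64 filler
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	cali, _ := net.ParseMAC("ee:ee:ee:ee:ee:ee") // Calico's fixed host-side veth MAC
	vx, _ := net.ParseMAC("66:54:00:9c:8e:79")   // implied by fe80::6454:ff:fe9c:8e79 above
	fmt.Println(linkLocalFromMAC(cali), linkLocalFromMAC(vx))
	// Output: fe80::ecee:eeff:feee:eeee fe80::6454:ff:fe9c:8e79
}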
Jan 24 00:39:14.570648 containerd[1988]: time="2026-01-24T00:39:14.570471346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:14.572085 containerd[1988]: time="2026-01-24T00:39:14.571983460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:39:14.572085 containerd[1988]: time="2026-01-24T00:39:14.572026924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:39:14.572299 kubelet[3194]: E0124 00:39:14.572230 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:39:14.572299 kubelet[3194]: E0124 00:39:14.572286 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:39:14.572885 kubelet[3194]: E0124 00:39:14.572438 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a61c7262178c49c787cf179bd2771f88,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:14.575947 containerd[1988]: 
time="2026-01-24T00:39:14.575601513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:39:14.847597 containerd[1988]: time="2026-01-24T00:39:14.847439663Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:14.848968 containerd[1988]: time="2026-01-24T00:39:14.848701445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:39:14.848968 containerd[1988]: time="2026-01-24T00:39:14.848921564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:39:14.849270 kubelet[3194]: E0124 00:39:14.849225 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:39:14.849342 kubelet[3194]: E0124 00:39:14.849280 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:39:14.849451 kubelet[3194]: E0124 00:39:14.849412 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:14.850977 kubelet[3194]: E0124 00:39:14.850897 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:39:14.914906 sshd[5788]: Accepted publickey for core from 4.153.228.146 port 35250 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:14.916731 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:14.922266 systemd-logind[1968]: New session 9 of user core. Jan 24 00:39:14.927694 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 24 00:39:15.423779 sshd[5788]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:15.427991 systemd-logind[1968]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:39:15.429153 systemd[1]: sshd@8-172.31.23.37:22-4.153.228.146:35250.service: Deactivated successfully. Jan 24 00:39:15.432234 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:39:15.433490 systemd-logind[1968]: Removed session 9. Jan 24 00:39:18.266573 containerd[1988]: time="2026-01-24T00:39:18.266519907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:39:18.556566 containerd[1988]: time="2026-01-24T00:39:18.556412903Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:18.558071 containerd[1988]: time="2026-01-24T00:39:18.557999673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:39:18.558194 containerd[1988]: time="2026-01-24T00:39:18.558102535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:18.558326 kubelet[3194]: E0124 00:39:18.558255 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:18.558326 kubelet[3194]: E0124 00:39:18.558305 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:18.558768 kubelet[3194]: E0124 00:39:18.558440 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68sfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-phlb5_calico-apiserver(6c246b84-9265-4837-8997-3779f5365703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:18.559946 kubelet[3194]: E0124 00:39:18.559888 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:39:19.270848 containerd[1988]: time="2026-01-24T00:39:19.270146264Z" level=info msg="StopPodSandbox for \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\"" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.313 [WARNING][5825] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08028277-ca96-466b-b85d-b33e87d62943", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08", Pod:"csi-node-driver-g8z2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic80f90f8826", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.314 [INFO][5825] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.314 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" iface="eth0" netns="" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.314 [INFO][5825] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.314 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.339 [INFO][5833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.339 [INFO][5833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.339 [INFO][5833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.345 [WARNING][5833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.345 [INFO][5833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.348 [INFO][5833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.353136 containerd[1988]: 2026-01-24 00:39:19.350 [INFO][5825] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.353739 containerd[1988]: time="2026-01-24T00:39:19.353200471Z" level=info msg="TearDown network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\" successfully" Jan 24 00:39:19.353739 containerd[1988]: time="2026-01-24T00:39:19.353229457Z" level=info msg="StopPodSandbox for \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\" returns successfully" Jan 24 00:39:19.364499 containerd[1988]: time="2026-01-24T00:39:19.364420835Z" level=info msg="RemovePodSandbox for \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\"" Jan 24 00:39:19.364499 containerd[1988]: time="2026-01-24T00:39:19.364458693Z" level=info msg="Forcibly stopping sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\"" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.407 [WARNING][5847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08028277-ca96-466b-b85d-b33e87d62943", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"313c38bae171ad834a4027c723ad907c4c837ec4201eaa02ee844e006b853f08", Pod:"csi-node-driver-g8z2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic80f90f8826", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.408 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.408 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" iface="eth0" netns="" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.408 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.408 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.431 [INFO][5854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.431 [INFO][5854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.431 [INFO][5854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.439 [WARNING][5854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.439 [INFO][5854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" HandleID="k8s-pod-network.651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Workload="ip--172--31--23--37-k8s-csi--node--driver--g8z2m-eth0" Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.441 [INFO][5854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.448431 containerd[1988]: 2026-01-24 00:39:19.445 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b" Jan 24 00:39:19.448431 containerd[1988]: time="2026-01-24T00:39:19.447396798Z" level=info msg="TearDown network for sandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\" successfully" Jan 24 00:39:19.453214 containerd[1988]: time="2026-01-24T00:39:19.453169335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:19.453337 containerd[1988]: time="2026-01-24T00:39:19.453242783Z" level=info msg="RemovePodSandbox \"651dfcede51ef7f622b4016bc56b48f2125b793214fceb99d4a98bfe88dfbe5b\" returns successfully" Jan 24 00:39:19.453871 containerd[1988]: time="2026-01-24T00:39:19.453837215Z" level=info msg="StopPodSandbox for \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\"" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.490 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c246b84-9265-4837-8997-3779f5365703", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b", Pod:"calico-apiserver-5d8fb494d-phlb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c6e33f9202", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.491 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.491 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" iface="eth0" netns="" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.491 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.491 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.516 [INFO][5875] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.516 [INFO][5875] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.516 [INFO][5875] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.522 [WARNING][5875] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.522 [INFO][5875] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.523 [INFO][5875] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.529036 containerd[1988]: 2026-01-24 00:39:19.526 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.529036 containerd[1988]: time="2026-01-24T00:39:19.528045612Z" level=info msg="TearDown network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\" successfully" Jan 24 00:39:19.529036 containerd[1988]: time="2026-01-24T00:39:19.528071960Z" level=info msg="StopPodSandbox for \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\" returns successfully" Jan 24 00:39:19.529036 containerd[1988]: time="2026-01-24T00:39:19.528567710Z" level=info msg="RemovePodSandbox for \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\"" Jan 24 00:39:19.529036 containerd[1988]: time="2026-01-24T00:39:19.528604524Z" level=info msg="Forcibly stopping sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\"" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.566 [WARNING][5889] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"6c246b84-9265-4837-8997-3779f5365703", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"e555e26c30ed310ae0b19e00a79b2ab00e506f84bdd8d543527052f52fec738b", Pod:"calico-apiserver-5d8fb494d-phlb5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c6e33f9202", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.566 [INFO][5889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.566 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" iface="eth0" netns="" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.566 [INFO][5889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.566 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.599 [INFO][5896] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.599 [INFO][5896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.599 [INFO][5896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.608 [WARNING][5896] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.608 [INFO][5896] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" HandleID="k8s-pod-network.91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--phlb5-eth0" Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.610 [INFO][5896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.615311 containerd[1988]: 2026-01-24 00:39:19.612 [INFO][5889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233" Jan 24 00:39:19.615311 containerd[1988]: time="2026-01-24T00:39:19.614054831Z" level=info msg="TearDown network for sandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\" successfully" Jan 24 00:39:19.617944 containerd[1988]: time="2026-01-24T00:39:19.617860603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:19.617944 containerd[1988]: time="2026-01-24T00:39:19.617939855Z" level=info msg="RemovePodSandbox \"91bb90b28a6562eea6bc74ed7be2db8d1472d69a8cd2d92ca8229cab4b8ea233\" returns successfully" Jan 24 00:39:19.618898 containerd[1988]: time="2026-01-24T00:39:19.618478584Z" level=info msg="StopPodSandbox for \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\"" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.656 [WARNING][5910] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.657 [INFO][5910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.657 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" iface="eth0" netns="" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.657 [INFO][5910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.657 [INFO][5910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.682 [INFO][5917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.683 [INFO][5917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.683 [INFO][5917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.689 [WARNING][5917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.689 [INFO][5917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.691 [INFO][5917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.695239 containerd[1988]: 2026-01-24 00:39:19.693 [INFO][5910] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.696144 containerd[1988]: time="2026-01-24T00:39:19.695302561Z" level=info msg="TearDown network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\" successfully" Jan 24 00:39:19.696144 containerd[1988]: time="2026-01-24T00:39:19.695343316Z" level=info msg="StopPodSandbox for \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\" returns successfully" Jan 24 00:39:19.696937 containerd[1988]: time="2026-01-24T00:39:19.696440970Z" level=info msg="RemovePodSandbox for \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\"" Jan 24 00:39:19.696937 containerd[1988]: time="2026-01-24T00:39:19.696486209Z" level=info msg="Forcibly stopping sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\"" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.734 [WARNING][5931] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" WorkloadEndpoint="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.734 [INFO][5931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.734 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" iface="eth0" netns="" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.734 [INFO][5931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.734 [INFO][5931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.778 [INFO][5938] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.778 [INFO][5938] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.778 [INFO][5938] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.785 [WARNING][5938] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.785 [INFO][5938] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" HandleID="k8s-pod-network.04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Workload="ip--172--31--23--37-k8s-whisker--56d78b5697--k5nq5-eth0" Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.788 [INFO][5938] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.794485 containerd[1988]: 2026-01-24 00:39:19.790 [INFO][5931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6" Jan 24 00:39:19.794485 containerd[1988]: time="2026-01-24T00:39:19.793007658Z" level=info msg="TearDown network for sandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\" successfully" Jan 24 00:39:19.799587 containerd[1988]: time="2026-01-24T00:39:19.799538438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:19.799745 containerd[1988]: time="2026-01-24T00:39:19.799623124Z" level=info msg="RemovePodSandbox \"04a3b6f67456eaaaffa954b31fb8feb31e2fa64982b497ccc8743572b12d80d6\" returns successfully" Jan 24 00:39:19.800315 containerd[1988]: time="2026-01-24T00:39:19.800281707Z" level=info msg="StopPodSandbox for \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\"" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.838 [WARNING][5952] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"559b3199-5162-436c-ae6f-2ec7000948df", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab", Pod:"calico-apiserver-5d8fb494d-tmnz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4933b834dcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.838 [INFO][5952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.838 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" iface="eth0" netns="" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.839 [INFO][5952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.839 [INFO][5952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.862 [INFO][5959] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.863 [INFO][5959] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.863 [INFO][5959] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.869 [WARNING][5959] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.869 [INFO][5959] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.871 [INFO][5959] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.875733 containerd[1988]: 2026-01-24 00:39:19.873 [INFO][5952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.876626 containerd[1988]: time="2026-01-24T00:39:19.875846463Z" level=info msg="TearDown network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\" successfully" Jan 24 00:39:19.876626 containerd[1988]: time="2026-01-24T00:39:19.875877481Z" level=info msg="StopPodSandbox for \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\" returns successfully" Jan 24 00:39:19.876932 containerd[1988]: time="2026-01-24T00:39:19.876893979Z" level=info msg="RemovePodSandbox for \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\"" Jan 24 00:39:19.877006 containerd[1988]: time="2026-01-24T00:39:19.876931431Z" level=info msg="Forcibly stopping sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\"" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.916 [WARNING][5973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0", GenerateName:"calico-apiserver-5d8fb494d-", Namespace:"calico-apiserver", SelfLink:"", UID:"559b3199-5162-436c-ae6f-2ec7000948df", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d8fb494d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"34efd203cc78447b7cfb14ed4bcacae760aa3f75cdeb929ac0ec6ec822b16dab", Pod:"calico-apiserver-5d8fb494d-tmnz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4933b834dcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.916 [INFO][5973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.916 [INFO][5973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" iface="eth0" netns="" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.916 [INFO][5973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.916 [INFO][5973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.941 [INFO][5980] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.941 [INFO][5980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.941 [INFO][5980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.949 [WARNING][5980] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.949 [INFO][5980] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" HandleID="k8s-pod-network.6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Workload="ip--172--31--23--37-k8s-calico--apiserver--5d8fb494d--tmnz4-eth0" Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.950 [INFO][5980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:19.954675 containerd[1988]: 2026-01-24 00:39:19.952 [INFO][5973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315" Jan 24 00:39:19.955145 containerd[1988]: time="2026-01-24T00:39:19.954721029Z" level=info msg="TearDown network for sandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\" successfully" Jan 24 00:39:19.959414 containerd[1988]: time="2026-01-24T00:39:19.959354049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:19.959628 containerd[1988]: time="2026-01-24T00:39:19.959431250Z" level=info msg="RemovePodSandbox \"6998c3292501e33c11a4c86e033d44a73874296eaf2aefb44d450e581b0ab315\" returns successfully" Jan 24 00:39:19.960845 containerd[1988]: time="2026-01-24T00:39:19.960373327Z" level=info msg="StopPodSandbox for \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\"" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.005 [WARNING][5994] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e1f50d23-3a90-4692-90b0-6d62e0594e46", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82", Pod:"goldmane-666569f655-qnfl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali321b7c331af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.005 [INFO][5994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.005 [INFO][5994] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" iface="eth0" netns="" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.005 [INFO][5994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.005 [INFO][5994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.034 [INFO][6001] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.034 [INFO][6001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.034 [INFO][6001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.043 [WARNING][6001] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.043 [INFO][6001] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.045 [INFO][6001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.049146 containerd[1988]: 2026-01-24 00:39:20.047 [INFO][5994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.050466 containerd[1988]: time="2026-01-24T00:39:20.049981048Z" level=info msg="TearDown network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\" successfully" Jan 24 00:39:20.050466 containerd[1988]: time="2026-01-24T00:39:20.050021556Z" level=info msg="StopPodSandbox for \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\" returns successfully" Jan 24 00:39:20.051948 containerd[1988]: time="2026-01-24T00:39:20.051296503Z" level=info msg="RemovePodSandbox for \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\"" Jan 24 00:39:20.051948 containerd[1988]: time="2026-01-24T00:39:20.051335954Z" level=info msg="Forcibly stopping sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\"" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.093 [WARNING][6015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e1f50d23-3a90-4692-90b0-6d62e0594e46", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"41d9f618d226327687c58b1379cb606b2b0a867af25d5b43f200de5cba868f82", Pod:"goldmane-666569f655-qnfl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali321b7c331af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.094 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.094 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" iface="eth0" netns="" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.094 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.094 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.119 [INFO][6022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.119 [INFO][6022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.119 [INFO][6022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.126 [WARNING][6022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.126 [INFO][6022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" HandleID="k8s-pod-network.1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Workload="ip--172--31--23--37-k8s-goldmane--666569f655--qnfl2-eth0" Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.127 [INFO][6022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.132518 containerd[1988]: 2026-01-24 00:39:20.129 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17" Jan 24 00:39:20.133520 containerd[1988]: time="2026-01-24T00:39:20.132559853Z" level=info msg="TearDown network for sandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\" successfully" Jan 24 00:39:20.136994 containerd[1988]: time="2026-01-24T00:39:20.136954227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:20.137303 containerd[1988]: time="2026-01-24T00:39:20.137009740Z" level=info msg="RemovePodSandbox \"1deb4e2cacbe8bb41b64c92157eaff09cd25609fcb5b988d97bd6c8d732bde17\" returns successfully" Jan 24 00:39:20.137509 containerd[1988]: time="2026-01-24T00:39:20.137480110Z" level=info msg="StopPodSandbox for \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\"" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.179 [WARNING][6037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e4ae984-32f1-4342-8042-eb57d3f9ba21", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510", Pod:"coredns-668d6bf9bc-28dmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic917fff5863", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.180 [INFO][6037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.180 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" iface="eth0" netns="" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.180 [INFO][6037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.180 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.205 [INFO][6044] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.205 [INFO][6044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.205 [INFO][6044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.212 [WARNING][6044] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.212 [INFO][6044] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.214 [INFO][6044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.218758 containerd[1988]: 2026-01-24 00:39:20.216 [INFO][6037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.221502 containerd[1988]: time="2026-01-24T00:39:20.218799552Z" level=info msg="TearDown network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\" successfully" Jan 24 00:39:20.221502 containerd[1988]: time="2026-01-24T00:39:20.218822115Z" level=info msg="StopPodSandbox for \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\" returns successfully" Jan 24 00:39:20.221502 containerd[1988]: time="2026-01-24T00:39:20.219563145Z" level=info msg="RemovePodSandbox for \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\"" Jan 24 00:39:20.221502 containerd[1988]: time="2026-01-24T00:39:20.219590817Z" level=info msg="Forcibly stopping sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\"" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.254 [WARNING][6058] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1e4ae984-32f1-4342-8042-eb57d3f9ba21", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"87809e45741a2034588f0e40f4006eed6b73e6780e3c746e4d20eb3ac934b510", Pod:"coredns-668d6bf9bc-28dmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic917fff5863", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.254 [INFO][6058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.254 [INFO][6058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" iface="eth0" netns="" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.254 [INFO][6058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.254 [INFO][6058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.281 [INFO][6065] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.281 [INFO][6065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.281 [INFO][6065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.288 [WARNING][6065] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.288 [INFO][6065] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" HandleID="k8s-pod-network.2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--28dmx-eth0" Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.290 [INFO][6065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.294807 containerd[1988]: 2026-01-24 00:39:20.292 [INFO][6058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b" Jan 24 00:39:20.295657 containerd[1988]: time="2026-01-24T00:39:20.295613511Z" level=info msg="TearDown network for sandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\" successfully" Jan 24 00:39:20.300252 containerd[1988]: time="2026-01-24T00:39:20.300037953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:20.300252 containerd[1988]: time="2026-01-24T00:39:20.300102792Z" level=info msg="RemovePodSandbox \"2e5dc66ef0bdc7840e0e0fe748dee9d7f81de7a96fa7c04a2b436a735fa7195b\" returns successfully" Jan 24 00:39:20.302049 containerd[1988]: time="2026-01-24T00:39:20.302002546Z" level=info msg="StopPodSandbox for \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\"" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.337 [WARNING][6079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f29deb-ec14-4cf7-a095-b62aa4c4a912", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb", Pod:"coredns-668d6bf9bc-h7m4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib71042c61e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.338 [INFO][6079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.338 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" iface="eth0" netns="" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.338 [INFO][6079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.338 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.362 [INFO][6086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.363 [INFO][6086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.363 [INFO][6086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.370 [WARNING][6086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.370 [INFO][6086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.372 [INFO][6086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.378965 containerd[1988]: 2026-01-24 00:39:20.375 [INFO][6079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.379711 containerd[1988]: time="2026-01-24T00:39:20.379010519Z" level=info msg="TearDown network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\" successfully" Jan 24 00:39:20.379711 containerd[1988]: time="2026-01-24T00:39:20.379041022Z" level=info msg="StopPodSandbox for \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\" returns successfully" Jan 24 00:39:20.379711 containerd[1988]: time="2026-01-24T00:39:20.379647710Z" level=info msg="RemovePodSandbox for \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\"" Jan 24 00:39:20.379711 containerd[1988]: time="2026-01-24T00:39:20.379678282Z" level=info msg="Forcibly stopping sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\"" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.419 [WARNING][6100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f29deb-ec14-4cf7-a095-b62aa4c4a912", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"8bed8011c94eebeae25982d7a8fe53e755603c644ff068a8e086ac60f0b3f5fb", Pod:"coredns-668d6bf9bc-h7m4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib71042c61e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.419 [INFO][6100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.419 [INFO][6100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" iface="eth0" netns="" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.419 [INFO][6100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.419 [INFO][6100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.445 [INFO][6108] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.445 [INFO][6108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.445 [INFO][6108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.452 [WARNING][6108] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.452 [INFO][6108] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" HandleID="k8s-pod-network.dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Workload="ip--172--31--23--37-k8s-coredns--668d6bf9bc--h7m4v-eth0" Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.454 [INFO][6108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.459249 containerd[1988]: 2026-01-24 00:39:20.456 [INFO][6100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e" Jan 24 00:39:20.460514 containerd[1988]: time="2026-01-24T00:39:20.459297078Z" level=info msg="TearDown network for sandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\" successfully" Jan 24 00:39:20.464509 containerd[1988]: time="2026-01-24T00:39:20.464456842Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:20.464656 containerd[1988]: time="2026-01-24T00:39:20.464525193Z" level=info msg="RemovePodSandbox \"dfeade5d65ee4643a24222b1ecc4daed1e1fdb2b8facca27f4551221a91bcd9e\" returns successfully" Jan 24 00:39:20.465230 containerd[1988]: time="2026-01-24T00:39:20.465208759Z" level=info msg="StopPodSandbox for \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\"" Jan 24 00:39:20.515776 systemd[1]: Started sshd@9-172.31.23.37:22-4.153.228.146:60034.service - OpenSSH per-connection server daemon (4.153.228.146:60034). Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.506 [WARNING][6123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0", GenerateName:"calico-kube-controllers-6c5f78b9cf-", Namespace:"calico-system", SelfLink:"", UID:"92126a9f-72bf-4007-b274-6c7bfe78315a", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5f78b9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47", Pod:"calico-kube-controllers-6c5f78b9cf-nf2hx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53f58e4a5cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.506 [INFO][6123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.506 [INFO][6123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" iface="eth0" netns="" Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.506 [INFO][6123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.506 [INFO][6123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.545 [INFO][6131] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.545 [INFO][6131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.545 [INFO][6131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.554 [WARNING][6131] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.554 [INFO][6131] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.560 [INFO][6131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.566955 containerd[1988]: 2026-01-24 00:39:20.563 [INFO][6123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.566955 containerd[1988]: time="2026-01-24T00:39:20.566854343Z" level=info msg="TearDown network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\" successfully" Jan 24 00:39:20.566955 containerd[1988]: time="2026-01-24T00:39:20.566884391Z" level=info msg="StopPodSandbox for \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\" returns successfully" Jan 24 00:39:20.570668 containerd[1988]: time="2026-01-24T00:39:20.567517351Z" level=info msg="RemovePodSandbox for \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\"" Jan 24 00:39:20.570668 containerd[1988]: time="2026-01-24T00:39:20.567551028Z" level=info msg="Forcibly stopping sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\"" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.611 [WARNING][6149] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0", GenerateName:"calico-kube-controllers-6c5f78b9cf-", Namespace:"calico-system", SelfLink:"", UID:"92126a9f-72bf-4007-b274-6c7bfe78315a", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 38, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c5f78b9cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-37", ContainerID:"532c185fa822b3fa9ed86c381a947149ca738e4a779340908d017a6f87addb47", Pod:"calico-kube-controllers-6c5f78b9cf-nf2hx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53f58e4a5cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.612 [INFO][6149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.612 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" iface="eth0" netns="" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.612 [INFO][6149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.612 [INFO][6149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.635 [INFO][6158] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.636 [INFO][6158] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.636 [INFO][6158] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.643 [WARNING][6158] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.643 [INFO][6158] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" HandleID="k8s-pod-network.23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Workload="ip--172--31--23--37-k8s-calico--kube--controllers--6c5f78b9cf--nf2hx-eth0" Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.646 [INFO][6158] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:39:20.650663 containerd[1988]: 2026-01-24 00:39:20.648 [INFO][6149] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822" Jan 24 00:39:20.651345 containerd[1988]: time="2026-01-24T00:39:20.650723610Z" level=info msg="TearDown network for sandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\" successfully" Jan 24 00:39:20.655601 containerd[1988]: time="2026-01-24T00:39:20.655557809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:39:20.655777 containerd[1988]: time="2026-01-24T00:39:20.655619219Z" level=info msg="RemovePodSandbox \"23b85aa81a6f37d00af76196074c2d5a64be66219b96f6de72f21a21aa585822\" returns successfully" Jan 24 00:39:21.077554 sshd[6135]: Accepted publickey for core from 4.153.228.146 port 60034 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:21.083283 sshd[6135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:21.090145 systemd-logind[1968]: New session 10 of user core. Jan 24 00:39:21.097659 systemd[1]: Started session-10.scope - Session 10 of User core. 
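Both teardown passes end with the IPAM plugin warning "Asked to release address but it doesn't exist. Ignoring", and containerd likewise treats a missing sandbox as success ("RemovePodSandbox ... returns successfully"). Delete paths in CNI and CRI are written to be idempotent: a not-found on release is downgraded to a warning so a repeated StopPodSandbox/RemovePodSandbox cannot wedge cleanup. A minimal sketch of that idiom, with illustrative names throughout:

```go
// Minimal sketch of the "treat not-found as success on delete" idiom.
// Store, error, and handle names are illustrative, not Calico's code.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("not found")

type ipamStore struct {
	mu    sync.Mutex
	addrs map[string]string // handleID -> allocated address
}

func (s *ipamStore) release(handleID string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.addrs[handleID]; !ok {
		return errNotFound
	}
	delete(s.addrs, handleID)
	return nil
}

// teardown is idempotent: releasing an already-released handle logs a
// warning and reports success instead of failing the whole CNI DEL.
func teardown(s *ipamStore, handleID string) error {
	if err := s.release(handleID); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("WARNING: asked to release %q but it doesn't exist; ignoring\n", handleID)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	s := &ipamStore{addrs: map[string]string{"k8s-pod-network.dfeade5d": "192.168.114.7/32"}}
	fmt.Println(teardown(s, "k8s-pod-network.dfeade5d")) // <nil>: address released
	fmt.Println(teardown(s, "k8s-pod-network.dfeade5d")) // warning, then <nil> again
}
```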
Jan 24 00:39:21.256284 containerd[1988]: time="2026-01-24T00:39:21.256091099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:39:21.527795 containerd[1988]: time="2026-01-24T00:39:21.527699098Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:21.529112 containerd[1988]: time="2026-01-24T00:39:21.529046492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:39:21.529242 containerd[1988]: time="2026-01-24T00:39:21.529138232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:21.529337 kubelet[3194]: E0124 00:39:21.529296 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:21.529678 kubelet[3194]: E0124 00:39:21.529351 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:21.529678 kubelet[3194]: E0124 00:39:21.529500 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfz58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-tmnz4_calico-apiserver(559b3199-5162-436c-ae6f-2ec7000948df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:21.531029 kubelet[3194]: E0124 00:39:21.530973 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:39:21.563700 sshd[6135]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:21.568044 systemd-logind[1968]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:39:21.569209 systemd[1]: sshd@9-172.31.23.37:22-4.153.228.146:60034.service: Deactivated successfully. Jan 24 00:39:21.571934 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:39:21.573209 systemd-logind[1968]: Removed session 10. Jan 24 00:39:21.651707 systemd[1]: Started sshd@10-172.31.23.37:22-4.153.228.146:60036.service - OpenSSH per-connection server daemon (4.153.228.146:60036). Jan 24 00:39:22.130672 sshd[6184]: Accepted publickey for core from 4.153.228.146 port 60036 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:22.132226 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:22.138200 systemd-logind[1968]: New session 11 of user core. Jan 24 00:39:22.145136 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:39:22.623128 sshd[6184]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:22.630967 systemd-logind[1968]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:39:22.632263 systemd[1]: sshd@10-172.31.23.37:22-4.153.228.146:60036.service: Deactivated successfully. Jan 24 00:39:22.634817 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:39:22.636475 systemd-logind[1968]: Removed session 11. Jan 24 00:39:22.721926 systemd[1]: Started sshd@11-172.31.23.37:22-4.153.228.146:60050.service - OpenSSH per-connection server daemon (4.153.228.146:60050). 
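From here on the log cycles through the same failure for every Calico image: containerd tries to resolve ghcr.io/flatcar/calico/apiserver:v3.30.4, gets HTTP 404 ("trying next host - response was http.StatusNotFound"), and surfaces it to kubelet as a gRPC NotFound. One way to confirm from outside the node that a tag really does not resolve is to query the registry's OCI distribution API directly. The sketch below does an anonymous token fetch and a manifest HEAD; the token flow mirrors ghcr.io's usual behavior but should be treated as an assumption rather than a stable contract:

```go
// Sketch: ask the registry's OCI distribution API whether a tag resolves.
// The anonymous token step reflects ghcr.io's usual flow but is an
// assumption here, not a guaranteed interface.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func tagExists(repo, tag string) (bool, error) {
	// 1. Fetch an anonymous pull token for the repository.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// 2. HEAD the manifest: 200 means the tag resolves, 404 means it does
	//    not, matching the http.StatusNotFound lines containerd logs above.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/apiserver", "v3.30.4")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("tag exists:", ok) // false is consistent with the NotFound errors above
}
```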
Jan 24 00:39:23.206320 sshd[6198]: Accepted publickey for core from 4.153.228.146 port 60050 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:23.207841 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:23.213449 systemd-logind[1968]: New session 12 of user core. Jan 24 00:39:23.218630 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:39:23.259758 containerd[1988]: time="2026-01-24T00:39:23.259718120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:39:23.535820 containerd[1988]: time="2026-01-24T00:39:23.535484847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:23.537796 containerd[1988]: time="2026-01-24T00:39:23.537185397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:39:23.537796 containerd[1988]: time="2026-01-24T00:39:23.537297187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:39:23.539821 kubelet[3194]: E0124 00:39:23.538091 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:39:23.539821 kubelet[3194]: E0124 00:39:23.538151 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:39:23.539821 kubelet[3194]: E0124 00:39:23.538330 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c56f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c5f78b9cf-nf2hx_calico-system(92126a9f-72bf-4007-b274-6c7bfe78315a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:23.541814 kubelet[3194]: E0124 00:39:23.541621 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a" Jan 24 00:39:23.637624 sshd[6198]: pam_unix(sshd:session): session closed for user core Jan 24 
00:39:23.640667 systemd[1]: sshd@11-172.31.23.37:22-4.153.228.146:60050.service: Deactivated successfully. Jan 24 00:39:23.643180 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:39:23.645539 systemd-logind[1968]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:39:23.647009 systemd-logind[1968]: Removed session 12. Jan 24 00:39:24.257637 containerd[1988]: time="2026-01-24T00:39:24.257114101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:39:24.514352 containerd[1988]: time="2026-01-24T00:39:24.514222924Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:24.515914 containerd[1988]: time="2026-01-24T00:39:24.515840717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:39:24.516011 containerd[1988]: time="2026-01-24T00:39:24.515926692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:24.516154 kubelet[3194]: E0124 00:39:24.516112 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:39:24.516214 kubelet[3194]: E0124 00:39:24.516161 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:39:24.516350 kubelet[3194]: E0124 00:39:24.516305 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j48zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qnfl2_calico-system(e1f50d23-3a90-4692-90b0-6d62e0594e46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:24.517863 kubelet[3194]: E0124 00:39:24.517811 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:39:25.257936 containerd[1988]: 
time="2026-01-24T00:39:25.257680580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:39:25.513663 containerd[1988]: time="2026-01-24T00:39:25.513528119Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:25.515124 containerd[1988]: time="2026-01-24T00:39:25.515077396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:39:25.515663 containerd[1988]: time="2026-01-24T00:39:25.515107580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:39:25.515698 kubelet[3194]: E0124 00:39:25.515329 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:39:25.515698 kubelet[3194]: E0124 00:39:25.515391 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:39:25.515698 kubelet[3194]: E0124 00:39:25.515516 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68lqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:25.518218 containerd[1988]: time="2026-01-24T00:39:25.518180177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:39:25.783156 containerd[1988]: time="2026-01-24T00:39:25.783028640Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:25.785116 containerd[1988]: time="2026-01-24T00:39:25.785043310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:39:25.785273 containerd[1988]: time="2026-01-24T00:39:25.785056251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:39:25.785428 kubelet[3194]: E0124 00:39:25.785351 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:39:25.785526 kubelet[3194]: E0124 00:39:25.785440 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:39:25.785662 kubelet[3194]: E0124 00:39:25.785618 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68lqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:25.787200 kubelet[3194]: E0124 00:39:25.787144 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:39:26.261440 kubelet[3194]: E0124 00:39:26.261399 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:39:28.727703 systemd[1]: Started sshd@12-172.31.23.37:22-4.153.228.146:56680.service - OpenSSH per-connection server daemon (4.153.228.146:56680). Jan 24 00:39:29.207354 sshd[6214]: Accepted publickey for core from 4.153.228.146 port 56680 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:29.208969 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:29.214575 systemd-logind[1968]: New session 13 of user core. Jan 24 00:39:29.218615 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:39:29.258525 kubelet[3194]: E0124 00:39:29.257923 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:39:29.642777 sshd[6214]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:29.648930 systemd-logind[1968]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:39:29.649741 systemd[1]: sshd@12-172.31.23.37:22-4.153.228.146:56680.service: Deactivated successfully. Jan 24 00:39:29.652506 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:39:29.653823 systemd-logind[1968]: Removed session 13. Jan 24 00:39:29.731749 systemd[1]: Started sshd@13-172.31.23.37:22-4.153.228.146:56686.service - OpenSSH per-connection server daemon (4.153.228.146:56686). Jan 24 00:39:30.231764 sshd[6227]: Accepted publickey for core from 4.153.228.146 port 56686 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:30.233522 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:30.238868 systemd-logind[1968]: New session 14 of user core. Jan 24 00:39:30.243623 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 24 00:39:33.257679 kubelet[3194]: E0124 00:39:33.257596 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:39:34.105438 sshd[6227]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:34.113649 systemd[1]: sshd@13-172.31.23.37:22-4.153.228.146:56686.service: Deactivated successfully. Jan 24 00:39:34.116841 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:39:34.117908 systemd-logind[1968]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:39:34.119257 systemd-logind[1968]: Removed session 14. Jan 24 00:39:34.195093 systemd[1]: Started sshd@14-172.31.23.37:22-4.153.228.146:56696.service - OpenSSH per-connection server daemon (4.153.228.146:56696). Jan 24 00:39:34.706297 sshd[6247]: Accepted publickey for core from 4.153.228.146 port 56696 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:34.718028 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:34.724646 systemd-logind[1968]: New session 15 of user core. Jan 24 00:39:34.728616 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:39:36.008628 sshd[6247]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:36.012703 systemd[1]: sshd@14-172.31.23.37:22-4.153.228.146:56696.service: Deactivated successfully. Jan 24 00:39:36.016351 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:39:36.019923 systemd-logind[1968]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:39:36.022517 systemd-logind[1968]: Removed session 15. Jan 24 00:39:36.092107 systemd[1]: Started sshd@15-172.31.23.37:22-4.153.228.146:45416.service - OpenSSH per-connection server daemon (4.153.228.146:45416). Jan 24 00:39:36.257725 kubelet[3194]: E0124 00:39:36.257677 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a" Jan 24 00:39:36.604630 sshd[6288]: Accepted publickey for core from 4.153.228.146 port 45416 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:36.606313 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:36.612575 systemd-logind[1968]: New session 16 of user core. Jan 24 00:39:36.619594 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 24 00:39:37.243158 sshd[6288]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:37.246315 systemd[1]: sshd@15-172.31.23.37:22-4.153.228.146:45416.service: Deactivated successfully. Jan 24 00:39:37.248335 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:39:37.252128 systemd-logind[1968]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:39:37.255610 systemd-logind[1968]: Removed session 16. Jan 24 00:39:37.263274 kubelet[3194]: E0124 00:39:37.262720 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:39:37.267846 kubelet[3194]: E0124 00:39:37.265531 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:39:37.331771 systemd[1]: Started sshd@16-172.31.23.37:22-4.153.228.146:45424.service - OpenSSH per-connection server daemon (4.153.228.146:45424). Jan 24 00:39:37.836240 sshd[6299]: Accepted publickey for core from 4.153.228.146 port 45424 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:37.837873 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:37.843314 systemd-logind[1968]: New session 17 of user core. Jan 24 00:39:37.846618 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:39:38.257574 sshd[6299]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:38.261803 systemd-logind[1968]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:39:38.263111 systemd[1]: sshd@16-172.31.23.37:22-4.153.228.146:45424.service: Deactivated successfully. Jan 24 00:39:38.267312 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:39:38.270881 systemd-logind[1968]: Removed session 17. 
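The interleaved SSH traffic comes from socket-activated sshd: systemd accepts each connection itself and spawns a per-connection unit named after a sequence number plus the local and remote endpoints (sshd@16-172.31.23.37:22-4.153.228.146:45424.service above), while pam_unix and systemd-logind open and close the matching session-N.scope. The helper below merely reconstructs that naming convention for illustration; it is not how systemd itself composes the name:

```go
// Reconstructs the per-connection unit names seen in the log, e.g.
// "sshd@16-172.31.23.37:22-4.153.228.146:45424.service". Formatting
// illustration only; systemd builds the real instance name internally.
package main

import (
	"fmt"
	"net"
)

func sshdUnitName(seq int, local, peer *net.TCPAddr) string {
	return fmt.Sprintf("sshd@%d-%s:%d-%s:%d.service",
		seq, local.IP, local.Port, peer.IP, peer.Port)
}

func main() {
	local := &net.TCPAddr{IP: net.ParseIP("172.31.23.37"), Port: 22}
	peer := &net.TCPAddr{IP: net.ParseIP("4.153.228.146"), Port: 45424}
	fmt.Println(sshdUnitName(16, local, peer))
	// sshd@16-172.31.23.37:22-4.153.228.146:45424.service
}
```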
Jan 24 00:39:40.257595 containerd[1988]: time="2026-01-24T00:39:40.256887511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:39:40.508256 containerd[1988]: time="2026-01-24T00:39:40.508124769Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:40.509668 containerd[1988]: time="2026-01-24T00:39:40.509546337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:39:40.509668 containerd[1988]: time="2026-01-24T00:39:40.509619188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:40.510028 kubelet[3194]: E0124 00:39:40.509965 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:40.510028 kubelet[3194]: E0124 00:39:40.510020 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:40.510656 kubelet[3194]: E0124 00:39:40.510256 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68sfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-phlb5_calico-apiserver(6c246b84-9265-4837-8997-3779f5365703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:40.510904 containerd[1988]: time="2026-01-24T00:39:40.510753329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:39:40.512045 kubelet[3194]: E0124 00:39:40.511996 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:39:40.784361 containerd[1988]: time="2026-01-24T00:39:40.784234097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:40.785805 containerd[1988]: time="2026-01-24T00:39:40.785751743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:39:40.785908 containerd[1988]: time="2026-01-24T00:39:40.785833842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:39:40.786051 kubelet[3194]: E0124 00:39:40.785984 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:39:40.786051 kubelet[3194]: E0124 00:39:40.786034 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:39:40.786543 kubelet[3194]: E0124 00:39:40.786139 3194 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a61c7262178c49c787cf179bd2771f88,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:40.788064 containerd[1988]: time="2026-01-24T00:39:40.788030778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:39:41.041672 containerd[1988]: time="2026-01-24T00:39:41.041541139Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:41.043064 containerd[1988]: time="2026-01-24T00:39:41.043006470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:39:41.043230 containerd[1988]: time="2026-01-24T00:39:41.043100991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:39:41.043352 kubelet[3194]: E0124 00:39:41.043300 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:39:41.043447 kubelet[3194]: E0124 00:39:41.043364 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:39:41.043540 kubelet[3194]: E0124 00:39:41.043502 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:41.045001 kubelet[3194]: E0124 00:39:41.044951 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:39:43.343797 systemd[1]: Started sshd@17-172.31.23.37:22-4.153.228.146:45428.service - OpenSSH per-connection server daemon (4.153.228.146:45428). 
Jan 24 00:39:43.891768 sshd[6315]: Accepted publickey for core from 4.153.228.146 port 45428 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:43.894764 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:43.901339 systemd-logind[1968]: New session 18 of user core. Jan 24 00:39:43.904587 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:39:44.707692 sshd[6315]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:44.710789 systemd[1]: sshd@17-172.31.23.37:22-4.153.228.146:45428.service: Deactivated successfully. Jan 24 00:39:44.714201 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:39:44.716610 systemd-logind[1968]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:39:44.717789 systemd-logind[1968]: Removed session 18. Jan 24 00:39:46.259045 containerd[1988]: time="2026-01-24T00:39:46.258826113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:39:46.527869 containerd[1988]: time="2026-01-24T00:39:46.527738966Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:46.529429 containerd[1988]: time="2026-01-24T00:39:46.529238611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:39:46.529429 containerd[1988]: time="2026-01-24T00:39:46.529265807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:46.529736 kubelet[3194]: E0124 00:39:46.529690 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:46.530074 kubelet[3194]: E0124 00:39:46.529748 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:39:46.530074 kubelet[3194]: E0124 00:39:46.529885 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfz58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-tmnz4_calico-apiserver(559b3199-5162-436c-ae6f-2ec7000948df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:46.531580 kubelet[3194]: E0124 00:39:46.531541 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:39:48.257803 containerd[1988]: time="2026-01-24T00:39:48.256550201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:39:48.551279 containerd[1988]: time="2026-01-24T00:39:48.551152079Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:48.552587 containerd[1988]: time="2026-01-24T00:39:48.552535975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:39:48.552904 
containerd[1988]: time="2026-01-24T00:39:48.552574437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:39:48.552961 kubelet[3194]: E0124 00:39:48.552920 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:39:48.553261 kubelet[3194]: E0124 00:39:48.552968 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:39:48.553261 kubelet[3194]: E0124 00:39:48.553100 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c56f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c5f78b9cf-nf2hx_calico-system(92126a9f-72bf-4007-b274-6c7bfe78315a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:48.557488 kubelet[3194]: E0124 00:39:48.557397 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a" Jan 24 00:39:49.796821 systemd[1]: Started sshd@18-172.31.23.37:22-4.153.228.146:55242.service - OpenSSH per-connection server daemon (4.153.228.146:55242). Jan 24 00:39:50.282108 sshd[6328]: Accepted publickey for core from 4.153.228.146 port 55242 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:50.283630 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:50.288530 systemd-logind[1968]: New session 19 of user core. Jan 24 00:39:50.296753 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:39:50.732653 sshd[6328]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:50.736552 systemd-logind[1968]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:39:50.737108 systemd[1]: sshd@18-172.31.23.37:22-4.153.228.146:55242.service: Deactivated successfully. Jan 24 00:39:50.740216 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:39:50.741280 systemd-logind[1968]: Removed session 19. 
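Every failed pull in this stretch has the same anatomy: containerd resolves the tag against ghcr.io, the registry answers 404 (the "trying next host - response was http.StatusNotFound" lines), and kubelet surfaces that as ErrImagePull. A stdlib-only diagnostic sketch that performs the same check by hand; the anonymous `/token` endpoint and the `/v2/<repo>/manifests/<tag>` path are assumptions about ghcr.io's public OCI distribution API, not details confirmed by this log:

```go
// Diagnostic sketch under stated assumptions: asks ghcr.io whether a tag
// exists, mirroring the 404 that containerd reports above.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/kube-controllers", "v3.30.4"

	// Assumed: ghcr.io issues anonymous pull tokens from /token.
	resp, err := http.Get(fmt.Sprintf(
		"https://ghcr.io/token?scope=repository:%s:pull", repo))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest; a 404 here is what containerd renders as "not found".
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status) // expect 404 Not Found
}
```

A 200 here would point at a containerd-side problem instead; a 404 confirms the tag simply is not published under that name.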
Jan 24 00:39:51.258551 containerd[1988]: time="2026-01-24T00:39:51.258516899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:39:51.530144 containerd[1988]: time="2026-01-24T00:39:51.530020036Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:51.531512 containerd[1988]: time="2026-01-24T00:39:51.531458737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:39:51.532428 containerd[1988]: time="2026-01-24T00:39:51.531550426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:39:51.532428 containerd[1988]: time="2026-01-24T00:39:51.532199632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:39:51.532529 kubelet[3194]: E0124 00:39:51.531743 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:39:51.532529 kubelet[3194]: E0124 00:39:51.531798 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:39:51.532529 kubelet[3194]: E0124 00:39:51.532044 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68lqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:51.790581 containerd[1988]: time="2026-01-24T00:39:51.790463284Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:51.795935 containerd[1988]: time="2026-01-24T00:39:51.795876959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:39:51.796176 containerd[1988]: time="2026-01-24T00:39:51.795962511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:39:51.796232 kubelet[3194]: E0124 00:39:51.796121 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:39:51.796232 kubelet[3194]: E0124 00:39:51.796167 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:39:51.796925 containerd[1988]: time="2026-01-24T00:39:51.796865039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:39:51.800561 kubelet[3194]: E0124 00:39:51.796449 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j48zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qnfl2_calico-system(e1f50d23-3a90-4692-90b0-6d62e0594e46): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:51.801339 kubelet[3194]: E0124 00:39:51.801296 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:39:52.037103 containerd[1988]: time="2026-01-24T00:39:52.037055034Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:39:52.038510 containerd[1988]: time="2026-01-24T00:39:52.038456387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:39:52.038666 containerd[1988]: time="2026-01-24T00:39:52.038536230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:39:52.038733 kubelet[3194]: E0124 00:39:52.038685 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:39:52.038827 kubelet[3194]: E0124 00:39:52.038732 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:39:52.038877 kubelet[3194]: E0124 00:39:52.038840 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68lqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-g8z2m_calico-system(08028277-ca96-466b-b85d-b33e87d62943): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:39:52.040422 kubelet[3194]: E0124 00:39:52.040277 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:39:52.256400 kubelet[3194]: E0124 00:39:52.256182 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:39:55.258163 kubelet[3194]: E0124 00:39:55.258094 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:39:55.818878 systemd[1]: Started sshd@19-172.31.23.37:22-4.153.228.146:37088.service - OpenSSH per-connection server daemon (4.153.228.146:37088). Jan 24 00:39:56.338256 sshd[6349]: Accepted publickey for core from 4.153.228.146 port 37088 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:39:56.340482 sshd[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:39:56.345595 systemd-logind[1968]: New session 20 of user core. Jan 24 00:39:56.353879 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:39:56.863357 sshd[6349]: pam_unix(sshd:session): session closed for user core Jan 24 00:39:56.867371 systemd[1]: sshd@19-172.31.23.37:22-4.153.228.146:37088.service: Deactivated successfully. Jan 24 00:39:56.869302 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:39:56.870979 systemd-logind[1968]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:39:56.872160 systemd-logind[1968]: Removed session 20. Jan 24 00:39:59.261144 kubelet[3194]: E0124 00:39:59.261043 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:40:02.029327 systemd[1]: Started sshd@20-172.31.23.37:22-4.153.228.146:37102.service - OpenSSH per-connection server daemon (4.153.228.146:37102). Jan 24 00:40:02.742475 sshd[6364]: Accepted publickey for core from 4.153.228.146 port 37102 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:40:02.749274 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:40:02.759061 systemd-logind[1968]: New session 21 of user core. Jan 24 00:40:02.770680 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 24 00:40:03.263099 kubelet[3194]: E0124 00:40:03.261844 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:40:03.325966 sshd[6364]: pam_unix(sshd:session): session closed for user core Jan 24 00:40:03.337123 systemd[1]: sshd@20-172.31.23.37:22-4.153.228.146:37102.service: Deactivated successfully. Jan 24 00:40:03.343531 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:40:03.345620 systemd-logind[1968]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:40:03.349332 systemd-logind[1968]: Removed session 21. Jan 24 00:40:04.263414 kubelet[3194]: E0124 00:40:04.262359 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a" Jan 24 00:40:04.362042 systemd[1]: run-containerd-runc-k8s.io-667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82-runc.ovI3Fr.mount: Deactivated successfully. 
Jan 24 00:40:05.261088 kubelet[3194]: E0124 00:40:05.260112 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:40:05.264696 kubelet[3194]: E0124 00:40:05.261935 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:40:08.416105 systemd[1]: Started sshd@21-172.31.23.37:22-4.153.228.146:39590.service - OpenSSH per-connection server daemon (4.153.228.146:39590). Jan 24 00:40:08.964226 sshd[6400]: Accepted publickey for core from 4.153.228.146 port 39590 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:40:08.968334 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:40:08.976070 systemd-logind[1968]: New session 22 of user core. Jan 24 00:40:08.984824 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:40:09.755308 sshd[6400]: pam_unix(sshd:session): session closed for user core Jan 24 00:40:09.763186 systemd-logind[1968]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:40:09.764778 systemd[1]: sshd@21-172.31.23.37:22-4.153.228.146:39590.service: Deactivated successfully. Jan 24 00:40:09.768134 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:40:09.771295 systemd-logind[1968]: Removed session 22. 
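From this point the kubelet entries alternate between ErrImagePull (an actual pull attempt failed) and ImagePullBackOff (the pull is skipped until an exponentially growing delay expires). A toy sketch of that schedule; the 10s initial delay and 300s cap are kubelet's commonly cited image-pull backoff defaults and are assumed here, not values read out of this log:

```go
// Illustrative sketch only: the exponential backoff pattern behind the
// "Back-off pulling image" entries above. Delays are assumed defaults.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second    // assumed initial backoff
	maxDelay := 300 * time.Second // assumed cap
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: retry pull after %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```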
Jan 24 00:40:10.263495 kubelet[3194]: E0124 00:40:10.262679 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:40:13.258833 kubelet[3194]: E0124 00:40:13.258787 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df" Jan 24 00:40:17.257166 kubelet[3194]: E0124 00:40:17.256967 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a" Jan 24 00:40:17.257634 kubelet[3194]: E0124 00:40:17.257355 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943" Jan 24 00:40:18.256023 kubelet[3194]: E0124 00:40:18.255981 3194 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46" Jan 24 00:40:19.255998 kubelet[3194]: E0124 00:40:19.255951 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703" Jan 24 00:40:21.256548 containerd[1988]: time="2026-01-24T00:40:21.256365408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:40:21.556510 containerd[1988]: time="2026-01-24T00:40:21.556348763Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:40:21.558636 containerd[1988]: time="2026-01-24T00:40:21.558562394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:40:21.558744 containerd[1988]: time="2026-01-24T00:40:21.558649157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:40:21.558864 kubelet[3194]: E0124 00:40:21.558804 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:40:21.558864 kubelet[3194]: E0124 00:40:21.558856 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:40:21.559251 kubelet[3194]: E0124 00:40:21.558961 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a61c7262178c49c787cf179bd2771f88,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:40:21.561188 containerd[1988]: time="2026-01-24T00:40:21.561156365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:40:21.805099 containerd[1988]: time="2026-01-24T00:40:21.805053834Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:40:21.806422 containerd[1988]: time="2026-01-24T00:40:21.806350262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:40:21.806583 containerd[1988]: time="2026-01-24T00:40:21.806450730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:40:21.807022 kubelet[3194]: E0124 00:40:21.806595 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:40:21.807022 kubelet[3194]: E0124 00:40:21.806679 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:40:21.807022 kubelet[3194]: E0124 00:40:21.806782 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-klmv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-764954c6fc-ns4t5_calico-system(e59d88b0-80a3-4d3a-8f96-ae389146720c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:40:21.808018 kubelet[3194]: E0124 00:40:21.807963 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c" Jan 24 00:40:23.259723 systemd[1]: cri-containerd-9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd.scope: Deactivated successfully. 
Jan 24 00:40:23.259979 systemd[1]: cri-containerd-9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd.scope: Consumed 5.311s CPU time, 26.9M memory peak, 0B memory swap peak. Jan 24 00:40:23.315792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd-rootfs.mount: Deactivated successfully. Jan 24 00:40:23.372156 containerd[1988]: time="2026-01-24T00:40:23.334268763Z" level=info msg="shim disconnected" id=9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd namespace=k8s.io Jan 24 00:40:23.386945 containerd[1988]: time="2026-01-24T00:40:23.386885754Z" level=warning msg="cleaning up after shim disconnected" id=9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd namespace=k8s.io Jan 24 00:40:23.386945 containerd[1988]: time="2026-01-24T00:40:23.386927878Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:40:24.010953 kubelet[3194]: I0124 00:40:24.010870 3194 scope.go:117] "RemoveContainer" containerID="9242c20583de1bcd07c78186e96ebfa674c5f7e940dd125894787d6949a098bd" Jan 24 00:40:24.030530 containerd[1988]: time="2026-01-24T00:40:24.030486650Z" level=info msg="CreateContainer within sandbox \"363d757182b6920e1435330f22df484e4ebb1d114ad594aacafb782dda771362\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 24 00:40:24.085545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384286252.mount: Deactivated successfully. Jan 24 00:40:24.087585 containerd[1988]: time="2026-01-24T00:40:24.087543256Z" level=info msg="CreateContainer within sandbox \"363d757182b6920e1435330f22df484e4ebb1d114ad594aacafb782dda771362\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b8310b5b2c2101cbe9702f5cf37708c6e48d381b812821c8449e5899ff9d34ef\"" Jan 24 00:40:24.088173 containerd[1988]: time="2026-01-24T00:40:24.088142865Z" level=info msg="StartContainer for \"b8310b5b2c2101cbe9702f5cf37708c6e48d381b812821c8449e5899ff9d34ef\"" Jan 24 00:40:24.146627 systemd[1]: Started cri-containerd-b8310b5b2c2101cbe9702f5cf37708c6e48d381b812821c8449e5899ff9d34ef.scope - libcontainer container b8310b5b2c2101cbe9702f5cf37708c6e48d381b812821c8449e5899ff9d34ef. Jan 24 00:40:24.233682 containerd[1988]: time="2026-01-24T00:40:24.233636434Z" level=info msg="StartContainer for \"b8310b5b2c2101cbe9702f5cf37708c6e48d381b812821c8449e5899ff9d34ef\" returns successfully" Jan 24 00:40:24.709458 systemd[1]: cri-containerd-6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393.scope: Deactivated successfully. Jan 24 00:40:24.709704 systemd[1]: cri-containerd-6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393.scope: Consumed 10.979s CPU time. Jan 24 00:40:24.742942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393-rootfs.mount: Deactivated successfully. 
Jan 24 00:40:24.754723 containerd[1988]: time="2026-01-24T00:40:24.754641794Z" level=info msg="shim disconnected" id=6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393 namespace=k8s.io
Jan 24 00:40:24.754723 containerd[1988]: time="2026-01-24T00:40:24.754717054Z" level=warning msg="cleaning up after shim disconnected" id=6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393 namespace=k8s.io
Jan 24 00:40:24.754723 containerd[1988]: time="2026-01-24T00:40:24.754726648Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:40:25.015627 kubelet[3194]: I0124 00:40:25.015507 3194 scope.go:117] "RemoveContainer" containerID="6a61826e312e1716b9b86b9edad7dd10e6486ee71f69d9d6e67ef867895bd393"
Jan 24 00:40:25.032418 containerd[1988]: time="2026-01-24T00:40:25.032354529Z" level=info msg="CreateContainer within sandbox \"e4cd27ff3925cd0003f492690cac16b8dcc0ecbb67ef02647d4a01d5eb45b56e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 24 00:40:25.056090 containerd[1988]: time="2026-01-24T00:40:25.055654707Z" level=info msg="CreateContainer within sandbox \"e4cd27ff3925cd0003f492690cac16b8dcc0ecbb67ef02647d4a01d5eb45b56e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6ea8d2695060c75209a77d55aff8d7d37ad21efe705ab62f267cd0356c801461\""
Jan 24 00:40:25.057746 containerd[1988]: time="2026-01-24T00:40:25.056435039Z" level=info msg="StartContainer for \"6ea8d2695060c75209a77d55aff8d7d37ad21efe705ab62f267cd0356c801461\""
Jan 24 00:40:25.098598 systemd[1]: Started cri-containerd-6ea8d2695060c75209a77d55aff8d7d37ad21efe705ab62f267cd0356c801461.scope - libcontainer container 6ea8d2695060c75209a77d55aff8d7d37ad21efe705ab62f267cd0356c801461.
Jan 24 00:40:25.133018 containerd[1988]: time="2026-01-24T00:40:25.132958410Z" level=info msg="StartContainer for \"6ea8d2695060c75209a77d55aff8d7d37ad21efe705ab62f267cd0356c801461\" returns successfully"
Jan 24 00:40:27.256688 containerd[1988]: time="2026-01-24T00:40:27.256366396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:40:27.551693 containerd[1988]: time="2026-01-24T00:40:27.551572235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:40:27.553852 containerd[1988]: time="2026-01-24T00:40:27.553683848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:40:27.553852 containerd[1988]: time="2026-01-24T00:40:27.553782708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:40:27.554393 kubelet[3194]: E0124 00:40:27.554175 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:40:27.554393 kubelet[3194]: E0124 00:40:27.554218 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:40:27.554844 kubelet[3194]: E0124 00:40:27.554766 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfz58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-tmnz4_calico-apiserver(559b3199-5162-436c-ae6f-2ec7000948df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:40:27.555960 kubelet[3194]: E0124 00:40:27.555914 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-tmnz4" podUID="559b3199-5162-436c-ae6f-2ec7000948df"
Jan 24 00:40:29.119205 systemd[1]: cri-containerd-db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef.scope: Deactivated successfully.
Jan 24 00:40:29.119987 systemd[1]: cri-containerd-db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef.scope: Consumed 2.353s CPU time, 16.7M memory peak, 0B memory swap peak.
Jan 24 00:40:29.148287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef-rootfs.mount: Deactivated successfully.
Jan 24 00:40:29.173133 containerd[1988]: time="2026-01-24T00:40:29.173074773Z" level=info msg="shim disconnected" id=db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef namespace=k8s.io
Jan 24 00:40:29.173133 containerd[1988]: time="2026-01-24T00:40:29.173126183Z" level=warning msg="cleaning up after shim disconnected" id=db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef namespace=k8s.io
Jan 24 00:40:29.173133 containerd[1988]: time="2026-01-24T00:40:29.173134747Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:40:30.029876 kubelet[3194]: I0124 00:40:30.029394 3194 scope.go:117] "RemoveContainer" containerID="db7baade39aa8b062cd6689b241d9392b393c1df7124038e6aa433ca8c9a4bef"
Jan 24 00:40:30.031714 containerd[1988]: time="2026-01-24T00:40:30.031678081Z" level=info msg="CreateContainer within sandbox \"506fc4f3654f7ee79776b624e1518e5dd78ab37f39257947760b4b1b94327d96\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 24 00:40:30.052989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1602353249.mount: Deactivated successfully.
Jan 24 00:40:30.057801 containerd[1988]: time="2026-01-24T00:40:30.057754313Z" level=info msg="CreateContainer within sandbox \"506fc4f3654f7ee79776b624e1518e5dd78ab37f39257947760b4b1b94327d96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d420e73ceb65c70722da51a14a20e665015631f73573d3c408b7f8c358802b71\""
Jan 24 00:40:30.058356 containerd[1988]: time="2026-01-24T00:40:30.058325933Z" level=info msg="StartContainer for \"d420e73ceb65c70722da51a14a20e665015631f73573d3c408b7f8c358802b71\""
Jan 24 00:40:30.103632 systemd[1]: Started cri-containerd-d420e73ceb65c70722da51a14a20e665015631f73573d3c408b7f8c358802b71.scope - libcontainer container d420e73ceb65c70722da51a14a20e665015631f73573d3c408b7f8c358802b71.
Jan 24 00:40:30.151433 systemd[1]: run-containerd-runc-k8s.io-d420e73ceb65c70722da51a14a20e665015631f73573d3c408b7f8c358802b71-runc.A4KACE.mount: Deactivated successfully.
Jan 24 00:40:30.160762 containerd[1988]: time="2026-01-24T00:40:30.160671673Z" level=info msg="StartContainer for \"d420e73ceb65c70722da51a14a20e665015631f73573d3c408b7f8c358802b71\" returns successfully"
Jan 24 00:40:30.257127 kubelet[3194]: E0124 00:40:30.257085 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qnfl2" podUID="e1f50d23-3a90-4692-90b0-6d62e0594e46"
Jan 24 00:40:30.258149 kubelet[3194]: E0124 00:40:30.257619 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-g8z2m" podUID="08028277-ca96-466b-b85d-b33e87d62943"
Jan 24 00:40:32.156291 kubelet[3194]: E0124 00:40:32.141107 3194 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-37?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 24 00:40:32.257279 containerd[1988]: time="2026-01-24T00:40:32.256977126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:40:32.517248 containerd[1988]: time="2026-01-24T00:40:32.517115612Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:40:32.519556 containerd[1988]: time="2026-01-24T00:40:32.519392296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:40:32.519556 containerd[1988]: time="2026-01-24T00:40:32.519496525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:40:32.519938 kubelet[3194]: E0124 00:40:32.519884 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:40:32.520037 kubelet[3194]: E0124 00:40:32.519938 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:40:32.520167 kubelet[3194]: E0124 00:40:32.520116 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c56f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c5f78b9cf-nf2hx_calico-system(92126a9f-72bf-4007-b274-6c7bfe78315a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:40:32.521466 kubelet[3194]: E0124 00:40:32.521429 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c5f78b9cf-nf2hx" podUID="92126a9f-72bf-4007-b274-6c7bfe78315a"
Jan 24 00:40:34.260248 kubelet[3194]: E0124 00:40:34.260156 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-764954c6fc-ns4t5" podUID="e59d88b0-80a3-4d3a-8f96-ae389146720c"
Jan 24 00:40:34.260822 containerd[1988]: time="2026-01-24T00:40:34.260425394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:40:34.285344 systemd[1]: run-containerd-runc-k8s.io-667fba57270061577d6af21dabf31d62407a7d142686faac2ebdd10f110b9a82-runc.mPKjbQ.mount: Deactivated successfully.
Jan 24 00:40:34.646831 containerd[1988]: time="2026-01-24T00:40:34.646683005Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:40:34.649274 containerd[1988]: time="2026-01-24T00:40:34.649212450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:40:34.649469 containerd[1988]: time="2026-01-24T00:40:34.649246609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:40:34.649551 kubelet[3194]: E0124 00:40:34.649483 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:40:34.649551 kubelet[3194]: E0124 00:40:34.649531 3194 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:40:34.649713 kubelet[3194]: E0124 00:40:34.649666 3194 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68sfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d8fb494d-phlb5_calico-apiserver(6c246b84-9265-4837-8997-3779f5365703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:40:34.650884 kubelet[3194]: E0124 00:40:34.650835 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d8fb494d-phlb5" podUID="6c246b84-9265-4837-8997-3779f5365703"