Jan 24 00:31:04.913131 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:31:04.913158 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:31:04.913170 kernel: BIOS-provided physical RAM map:
Jan 24 00:31:04.913177 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:31:04.913184 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 24 00:31:04.913190 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 24 00:31:04.913198 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 24 00:31:04.913205 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 24 00:31:04.913212 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 24 00:31:04.913221 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 24 00:31:04.913228 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 24 00:31:04.913234 kernel: NX (Execute Disable) protection: active
Jan 24 00:31:04.913241 kernel: APIC: Static calls initialized
Jan 24 00:31:04.913248 kernel: efi: EFI v2.7 by EDK II
Jan 24 00:31:04.913257 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 24 00:31:04.913267 kernel: SMBIOS 2.7 present.
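The kernel command line logged above carries the parameters that shape the rest of this boot: root=LABEL=ROOT selects the writable root filesystem, while mount.usr, verity.usr, and verity.usrhash pin the read-only /usr partition to a dm-verity device. A minimal sketch of how such a line splits into key/value pairs (illustrative Python, not part of the boot; quoting and multi-valued options such as the repeated console= are ignored here):

    # Split a kernel command line like the one above into a dict.
    # Bare flags without '=' map to True; later duplicates overwrite
    # earlier ones, so a real parser would keep lists for options
    # like console= that may appear more than once.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print(params.get("root"))            # e.g. LABEL=ROOT
    print(params.get("verity.usrhash"))  # the dm-verity root hash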
Jan 24 00:31:04.913275 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 24 00:31:04.913282 kernel: Hypervisor detected: KVM
Jan 24 00:31:04.913290 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 24 00:31:04.913297 kernel: kvm-clock: using sched offset of 4193478436 cycles
Jan 24 00:31:04.913305 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 24 00:31:04.913313 kernel: tsc: Detected 2499.998 MHz processor
Jan 24 00:31:04.913321 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:31:04.913329 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:31:04.913337 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 24 00:31:04.913348 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:31:04.913355 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:31:04.913363 kernel: Using GB pages for direct mapping
Jan 24 00:31:04.913371 kernel: Secure boot disabled
Jan 24 00:31:04.913378 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:31:04.913386 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 24 00:31:04.913394 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 24 00:31:04.913401 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 24 00:31:04.913409 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 24 00:31:04.913419 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 24 00:31:04.913427 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 24 00:31:04.913435 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 24 00:31:04.913442 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 24 00:31:04.913450 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 24 00:31:04.913458 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 24 00:31:04.914349 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 24 00:31:04.914364 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 24 00:31:04.914373 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 24 00:31:04.914381 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 24 00:31:04.914390 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 24 00:31:04.914398 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 24 00:31:04.914406 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 24 00:31:04.914417 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 24 00:31:04.914440 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 24 00:31:04.914448 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 24 00:31:04.914457 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 24 00:31:04.914528 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 24 00:31:04.914537 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 24 00:31:04.914545 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 24 00:31:04.914553 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 00:31:04.914565 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 00:31:04.914573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 24 00:31:04.914584 kernel: NUMA: Initialized distance table, cnt=1
Jan 24 00:31:04.914593 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 24 00:31:04.914601 kernel: Zone ranges:
Jan 24 00:31:04.914609 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:31:04.914618 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 24 00:31:04.914626 kernel: Normal empty
Jan 24 00:31:04.914634 kernel: Movable zone start for each node
Jan 24 00:31:04.914642 kernel: Early memory node ranges
Jan 24 00:31:04.914651 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:31:04.914659 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 24 00:31:04.914670 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 24 00:31:04.914678 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 24 00:31:04.914686 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:31:04.914694 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:31:04.914703 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 24 00:31:04.914711 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 24 00:31:04.914720 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 24 00:31:04.914728 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 24 00:31:04.914736 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 24 00:31:04.914747 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 24 00:31:04.914755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:31:04.914763 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 24 00:31:04.914771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 24 00:31:04.914779 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:31:04.914788 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 24 00:31:04.914796 kernel: TSC deadline timer available
Jan 24 00:31:04.914804 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:31:04.914813 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 24 00:31:04.914824 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 24 00:31:04.914832 kernel: Booting paravirtualized kernel on KVM
Jan 24 00:31:04.914851 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:31:04.914864 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:31:04.914876 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:31:04.914889 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:31:04.914900 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:31:04.914912 kernel: kvm-guest: PV spinlocks enabled
Jan 24 00:31:04.914925 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:31:04.914937 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:31:04.914946 kernel: random: crng init done
Jan 24 00:31:04.914955 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 24 00:31:04.914963 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 00:31:04.914971 kernel: Fallback order for Node 0: 0
Jan 24 00:31:04.914980 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 24 00:31:04.914988 kernel: Policy zone: DMA32
Jan 24 00:31:04.914997 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:31:04.915008 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved)
Jan 24 00:31:04.915016 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:31:04.915025 kernel: Kernel/User page tables isolation: enabled
Jan 24 00:31:04.915033 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:31:04.915041 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:31:04.915049 kernel: Dynamic Preempt: voluntary
Jan 24 00:31:04.915058 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:31:04.915067 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:31:04.915076 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:31:04.915086 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:31:04.915095 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:31:04.915105 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:31:04.915114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:31:04.915122 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:31:04.915130 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 24 00:31:04.915139 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:31:04.915167 kernel: Console: colour dummy device 80x25
Jan 24 00:31:04.915176 kernel: printk: console [tty0] enabled
Jan 24 00:31:04.915184 kernel: printk: console [ttyS0] enabled
Jan 24 00:31:04.915193 kernel: ACPI: Core revision 20230628
Jan 24 00:31:04.915202 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 24 00:31:04.915213 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:31:04.915231 kernel: x2apic enabled
Jan 24 00:31:04.915240 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 24 00:31:04.915250 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 24 00:31:04.915259 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 24 00:31:04.915270 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 24 00:31:04.915279 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 24 00:31:04.915288 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:31:04.915296 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:31:04.915305 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:31:04.915314 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 24 00:31:04.915330 kernel: RETBleed: Vulnerable
Jan 24 00:31:04.915340 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:31:04.915349 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:31:04.915357 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:31:04.915368 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 24 00:31:04.915380 kernel: active return thunk: its_return_thunk
Jan 24 00:31:04.915388 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 00:31:04.915397 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:31:04.915406 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:31:04.915415 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:31:04.915423 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 24 00:31:04.915432 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 24 00:31:04.915441 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 24 00:31:04.915449 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 24 00:31:04.915458 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 24 00:31:04.915480 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 24 00:31:04.915494 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:31:04.915504 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 24 00:31:04.915513 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 24 00:31:04.915521 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 24 00:31:04.915530 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 24 00:31:04.915539 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 24 00:31:04.915547 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 24 00:31:04.915556 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 24 00:31:04.915565 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:31:04.915574 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:31:04.915585 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:31:04.915594 kernel: landlock: Up and running.
Jan 24 00:31:04.915603 kernel: SELinux: Initializing.
Jan 24 00:31:04.915611 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 00:31:04.915620 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 24 00:31:04.915629 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 24 00:31:04.915639 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:31:04.915648 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:31:04.915657 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:31:04.915666 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 24 00:31:04.915678 kernel: signal: max sigframe size: 3632
Jan 24 00:31:04.915687 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:31:04.915696 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:31:04.915705 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:31:04.915714 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:31:04.915722 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:31:04.915731 kernel: .... node #0, CPUs: #1
Jan 24 00:31:04.915740 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 24 00:31:04.915750 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 24 00:31:04.915761 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:31:04.915770 kernel: smpboot: Max logical packages: 1
Jan 24 00:31:04.915779 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 24 00:31:04.915788 kernel: devtmpfs: initialized
Jan 24 00:31:04.915796 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:31:04.915805 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 24 00:31:04.915814 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:31:04.915823 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:31:04.915840 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:31:04.918675 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:31:04.918696 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:31:04.918706 kernel: audit: type=2000 audit(1769214664.452:1): state=initialized audit_enabled=0 res=1
Jan 24 00:31:04.918716 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:31:04.918725 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:31:04.918734 kernel: cpuidle: using governor menu
Jan 24 00:31:04.918743 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:31:04.918752 kernel: dca service started, version 1.12.1
Jan 24 00:31:04.918761 kernel: PCI: Using configuration type 1 for base access
Jan 24 00:31:04.918777 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:31:04.918786 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:31:04.918795 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:31:04.918804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:31:04.918812 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:31:04.918821 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:31:04.918830 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:31:04.918932 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:31:04.918944 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 24 00:31:04.918956 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:31:04.918965 kernel: ACPI: Interpreter enabled
Jan 24 00:31:04.918974 kernel: ACPI: PM: (supports S0 S5)
Jan 24 00:31:04.918983 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:31:04.918992 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:31:04.919002 kernel: PCI: Using E820 reservations for host bridge windows
Jan 24 00:31:04.919011 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 24 00:31:04.919020 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 24 00:31:04.919186 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 24 00:31:04.919292 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 24 00:31:04.919384 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 24 00:31:04.919395 kernel: acpiphp: Slot [3] registered
Jan 24 00:31:04.919405 kernel: acpiphp: Slot [4] registered
Jan 24 00:31:04.919414 kernel: acpiphp: Slot [5] registered
Jan 24 00:31:04.919422 kernel: acpiphp: Slot [6] registered
Jan 24 00:31:04.919431 kernel: acpiphp: Slot [7] registered
Jan 24 00:31:04.919443 kernel: acpiphp: Slot [8] registered
Jan 24 00:31:04.919451 kernel: acpiphp: Slot [9] registered
Jan 24 00:31:04.919461 kernel: acpiphp: Slot [10] registered
Jan 24 00:31:04.920548 kernel: acpiphp: Slot [11] registered
Jan 24 00:31:04.920559 kernel: acpiphp: Slot [12] registered
Jan 24 00:31:04.920568 kernel: acpiphp: Slot [13] registered
Jan 24 00:31:04.920577 kernel: acpiphp: Slot [14] registered
Jan 24 00:31:04.920586 kernel: acpiphp: Slot [15] registered
Jan 24 00:31:04.920595 kernel: acpiphp: Slot [16] registered
Jan 24 00:31:04.920605 kernel: acpiphp: Slot [17] registered
Jan 24 00:31:04.920618 kernel: acpiphp: Slot [18] registered
Jan 24 00:31:04.920627 kernel: acpiphp: Slot [19] registered
Jan 24 00:31:04.920636 kernel: acpiphp: Slot [20] registered
Jan 24 00:31:04.920645 kernel: acpiphp: Slot [21] registered
Jan 24 00:31:04.920654 kernel: acpiphp: Slot [22] registered
Jan 24 00:31:04.920663 kernel: acpiphp: Slot [23] registered
Jan 24 00:31:04.920672 kernel: acpiphp: Slot [24] registered
Jan 24 00:31:04.920681 kernel: acpiphp: Slot [25] registered
Jan 24 00:31:04.920690 kernel: acpiphp: Slot [26] registered
Jan 24 00:31:04.920701 kernel: acpiphp: Slot [27] registered
Jan 24 00:31:04.920710 kernel: acpiphp: Slot [28] registered
Jan 24 00:31:04.920719 kernel: acpiphp: Slot [29] registered
Jan 24 00:31:04.920727 kernel: acpiphp: Slot [30] registered
Jan 24 00:31:04.920736 kernel: acpiphp: Slot [31] registered
Jan 24 00:31:04.920745 kernel: PCI host bridge to bus 0000:00
Jan 24 00:31:04.920877 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 24 00:31:04.920967 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 24 00:31:04.921056 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 24 00:31:04.921138 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 24 00:31:04.921220 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 24 00:31:04.921302 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 24 00:31:04.921409 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 24 00:31:04.922740 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 24 00:31:04.922874 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 24 00:31:04.922994 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 24 00:31:04.923089 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 24 00:31:04.923181 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 24 00:31:04.923272 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 24 00:31:04.923363 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 24 00:31:04.923453 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 24 00:31:04.923558 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 24 00:31:04.923658 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 24 00:31:04.923749 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 24 00:31:04.923840 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 24 00:31:04.923932 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 24 00:31:04.924022 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 24 00:31:04.924118 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 24 00:31:04.924213 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 24 00:31:04.924315 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 24 00:31:04.924405 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 24 00:31:04.924417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 24 00:31:04.924427 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 24 00:31:04.924436 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 24 00:31:04.924445 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 24 00:31:04.924454 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 24 00:31:04.927049 kernel: iommu: Default domain type: Translated
Jan 24 00:31:04.927066 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:31:04.927076 kernel: efivars: Registered efivars operations
Jan 24 00:31:04.927086 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:31:04.927095 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 24 00:31:04.927105 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 24 00:31:04.927114 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 24 00:31:04.927254 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 24 00:31:04.927349 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 24 00:31:04.927449 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 24 00:31:04.927461 kernel: vgaarb: loaded
Jan 24 00:31:04.927559 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 24 00:31:04.927569 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 24 00:31:04.927578 kernel: clocksource: Switched to clocksource kvm-clock
Jan 24 00:31:04.927587 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:31:04.927596 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:31:04.927605 kernel: pnp: PnP ACPI init
Jan 24 00:31:04.927618 kernel: pnp: PnP ACPI: found 5 devices
Jan 24 00:31:04.927628 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:31:04.927637 kernel: NET: Registered PF_INET protocol family
Jan 24 00:31:04.927646 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 24 00:31:04.927655 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 24 00:31:04.927664 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:31:04.927673 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:31:04.927682 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 24 00:31:04.927691 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 24 00:31:04.927702 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 00:31:04.927711 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 24 00:31:04.927720 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:31:04.927729 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:31:04.927823 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 24 00:31:04.927906 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 24 00:31:04.927989 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 24 00:31:04.928071 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 24 00:31:04.928152 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 24 00:31:04.928251 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 24 00:31:04.928264 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:31:04.928273 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 24 00:31:04.928282 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 24 00:31:04.928291 kernel: clocksource: Switched to clocksource tsc
Jan 24 00:31:04.928301 kernel: Initialise system trusted keyrings
Jan 24 00:31:04.928309 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 24 00:31:04.928318 kernel: Key type asymmetric registered
Jan 24 00:31:04.928330 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:31:04.928339 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:31:04.928348 kernel: io scheduler mq-deadline registered
Jan 24 00:31:04.928357 kernel: io scheduler kyber registered
Jan 24 00:31:04.928365 kernel: io scheduler bfq registered
Jan 24 00:31:04.928374 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:31:04.928384 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:31:04.928393 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:31:04.928402 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 24 00:31:04.928414 kernel: i8042: Warning: Keylock active
Jan 24 00:31:04.928422 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 24 00:31:04.928432 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 24 00:31:04.928544 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 24 00:31:04.928633 kernel: rtc_cmos 00:00: registered as rtc0
Jan 24 00:31:04.928719 kernel: rtc_cmos 00:00: setting system clock to 2026-01-24T00:31:04 UTC (1769214664)
Jan 24 00:31:04.928806 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 24 00:31:04.928818 kernel: intel_pstate: CPU model not supported
Jan 24 00:31:04.928830 kernel: efifb: probing for efifb
Jan 24 00:31:04.928840 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 24 00:31:04.928849 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 24 00:31:04.928858 kernel: efifb: scrolling: redraw
Jan 24 00:31:04.928867 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 24 00:31:04.928876 kernel: Console: switching to colour frame buffer device 100x37
Jan 24 00:31:04.928885 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:31:04.928894 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:31:04.928903 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:31:04.928914 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:31:04.928923 kernel: Segment Routing with IPv6
Jan 24 00:31:04.928932 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:31:04.928941 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:31:04.928949 kernel: Key type dns_resolver registered
Jan 24 00:31:04.928958 kernel: IPI shorthand broadcast: enabled
Jan 24 00:31:04.928987 kernel: sched_clock: Marking stable (463001950, 125991946)->(681949063, -92955167)
Jan 24 00:31:04.928998 kernel: registered taskstats version 1
Jan 24 00:31:04.929008 kernel: Loading compiled-in X.509 certificates
Jan 24 00:31:04.929020 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:31:04.929029 kernel: Key type .fscrypt registered
Jan 24 00:31:04.929038 kernel: Key type fscrypt-provisioning registered
Jan 24 00:31:04.929047 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:31:04.929056 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:31:04.929066 kernel: ima: No architecture policies found
Jan 24 00:31:04.929075 kernel: clk: Disabling unused clocks
Jan 24 00:31:04.929085 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:31:04.929094 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:31:04.929107 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:31:04.929117 kernel: Run /init as init process
Jan 24 00:31:04.929126 kernel: with arguments:
Jan 24 00:31:04.929136 kernel: /init
Jan 24 00:31:04.929145 kernel: with environment:
Jan 24 00:31:04.929154 kernel: HOME=/
Jan 24 00:31:04.929163 kernel: TERM=linux
Jan 24 00:31:04.929175 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:31:04.929192 systemd[1]: Detected virtualization amazon.
Jan 24 00:31:04.929202 systemd[1]: Detected architecture x86-64.
Jan 24 00:31:04.929211 systemd[1]: Running in initrd.
Jan 24 00:31:04.929221 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:31:04.929230 systemd[1]: Hostname set to .
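The rtc_cmos entry above reports the same instant twice, once as a wall-clock string and once as a Unix epoch in parentheses; the two are easy to cross-check (plain Python, nothing boot-specific):

    # Confirm that epoch 1769214664 from the rtc_cmos line is the
    # logged 2026-01-24T00:31:04 UTC.
    from datetime import datetime, timezone
    t = datetime.fromtimestamp(1769214664, tz=timezone.utc)
    print(t.isoformat())  # 2026-01-24T00:31:04+00:00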
Jan 24 00:31:04.929240 systemd[1]: Initializing machine ID from VM UUID.
Jan 24 00:31:04.929250 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:31:04.929259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:31:04.929271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:31:04.929283 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:31:04.929292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:31:04.929302 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:31:04.929315 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:31:04.929328 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:31:04.929338 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:31:04.929348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:31:04.929358 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:31:04.929368 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:31:04.929377 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:31:04.929387 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:31:04.929399 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:31:04.929409 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:31:04.929419 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:31:04.929429 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:31:04.929439 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:31:04.929448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:31:04.929458 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:31:04.931505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:31:04.931522 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:31:04.931537 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:31:04.931547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:31:04.931557 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:31:04.931567 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:31:04.931577 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:31:04.931587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:31:04.931597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:31:04.931607 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:31:04.931649 systemd-journald[179]: Collecting audit messages is disabled.
Jan 24 00:31:04.931674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:31:04.931684 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:31:04.931698 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:31:04.931708 systemd-journald[179]: Journal started
Jan 24 00:31:04.931730 systemd-journald[179]: Runtime Journal (/run/log/journal/ec248ece9b5fec2b93ed7e2c8b0ce070) is 4.7M, max 38.2M, 33.4M free.
Jan 24 00:31:04.913097 systemd-modules-load[180]: Inserted module 'overlay'
Jan 24 00:31:04.934541 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:31:04.944074 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:31:04.945770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:31:04.949132 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:31:04.951077 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:31:04.955438 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 24 00:31:04.956584 kernel: Bridge firewalling registered
Jan 24 00:31:04.958694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:31:04.962614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:31:04.963984 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:31:04.971605 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:31:04.974982 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:31:04.980054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:31:04.986666 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:31:04.987399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:31:04.989424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:31:04.992251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:31:04.999693 dracut-cmdline[210]: dracut-dracut-053
Jan 24 00:31:05.005029 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:31:05.008192 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 24 00:31:05.033416 systemd-resolved[217]: Positive Trust Anchors:
Jan 24 00:31:05.033432 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:31:05.033482 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:31:05.039749 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 24 00:31:05.041239 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:31:05.041682 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:31:05.086510 kernel: SCSI subsystem initialized
Jan 24 00:31:05.095505 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:31:05.107503 kernel: iscsi: registered transport (tcp)
Jan 24 00:31:05.129521 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:31:05.129592 kernel: QLogic iSCSI HBA Driver
Jan 24 00:31:05.174501 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:31:05.182668 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:31:05.208833 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:31:05.208911 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:31:05.208935 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:31:05.252499 kernel: raid6: avx512x4 gen() 18145 MB/s
Jan 24 00:31:05.270496 kernel: raid6: avx512x2 gen() 18077 MB/s
Jan 24 00:31:05.288499 kernel: raid6: avx512x1 gen() 18165 MB/s
Jan 24 00:31:05.306494 kernel: raid6: avx2x4 gen() 17967 MB/s
Jan 24 00:31:05.324497 kernel: raid6: avx2x2 gen() 17954 MB/s
Jan 24 00:31:05.342737 kernel: raid6: avx2x1 gen() 13845 MB/s
Jan 24 00:31:05.342800 kernel: raid6: using algorithm avx512x1 gen() 18165 MB/s
Jan 24 00:31:05.361676 kernel: raid6: .... xor() 21410 MB/s, rmw enabled
Jan 24 00:31:05.361744 kernel: raid6: using avx512x2 recovery algorithm
Jan 24 00:31:05.383505 kernel: xor: automatically using best checksumming function avx
Jan 24 00:31:05.543506 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:31:05.554266 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:31:05.559696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:31:05.581413 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 24 00:31:05.586650 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:31:05.597657 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:31:05.616080 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jan 24 00:31:05.648045 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:31:05.652759 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:31:05.707137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:31:05.718960 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:31:05.744202 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:31:05.747058 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:31:05.748931 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:31:05.749412 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:31:05.754700 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:31:05.781517 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:31:05.816491 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:31:05.825131 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:31:05.826261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:31:05.827145 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:31:05.829526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:31:05.829744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:31:05.830998 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:31:05.838990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:31:05.858275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:31:05.874695 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:31:05.874736 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:31:05.874760 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 24 00:31:05.875115 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 24 00:31:05.875310 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 24 00:31:05.858406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:31:05.884020 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 24 00:31:05.884298 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 24 00:31:05.884327 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:96:fb:ee:25:7f
Jan 24 00:31:05.887441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:31:05.900544 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 24 00:31:05.910145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:31:05.917668 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 24 00:31:05.917704 kernel: GPT:9289727 != 33554431
Jan 24 00:31:05.917722 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 24 00:31:05.917742 kernel: GPT:9289727 != 33554431
Jan 24 00:31:05.917760 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:31:05.917780 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:31:05.920072 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
Jan 24 00:31:05.926800 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:31:05.948901 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:31:06.007843 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Jan 24 00:31:06.039769 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
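The GPT complaints above ("GPT:9289727 != 33554431") are expected on first boot: the backup GPT header baked into the Flatcar disk image still sits at the image's last LBA, while the EBS volume it was written to is larger; disk-uuid.service rewrites the headers shortly after. Assuming the 512-byte logical sectors that EBS NVMe volumes report, the arithmetic behind the two numbers is:

    # Compare the image's last LBA with the volume's last LBA, both
    # taken from the GPT warning above (LBAs are zero-based).
    SECTOR = 512
    image_bytes = (9289727 + 1) * SECTOR   # ~4.43 GiB disk image
    disk_bytes  = (33554431 + 1) * SECTOR  # exactly 16 GiB EBS volume
    print(image_bytes / 2**30, disk_bytes / 2**30)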
Jan 24 00:31:06.045488 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (451)
Jan 24 00:31:06.088706 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 24 00:31:06.104775 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 24 00:31:06.105332 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 24 00:31:06.112804 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 24 00:31:06.119660 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:31:06.126398 disk-uuid[632]: Primary Header is updated.
Jan 24 00:31:06.126398 disk-uuid[632]: Secondary Entries is updated.
Jan 24 00:31:06.126398 disk-uuid[632]: Secondary Header is updated.
Jan 24 00:31:06.132490 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:31:06.139080 kernel: GPT:disk_guids don't match.
Jan 24 00:31:06.139145 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 24 00:31:06.139160 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:31:06.149517 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:31:07.147543 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 24 00:31:07.148061 disk-uuid[633]: The operation has completed successfully.
Jan 24 00:31:07.283013 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:31:07.283150 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:31:07.298682 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:31:07.303834 sh[978]: Success
Jan 24 00:31:07.324488 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 24 00:31:07.413906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:31:07.421567 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:31:07.423610 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:31:07.464853 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:31:07.464916 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:31:07.467147 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:31:07.470071 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:31:07.470140 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:31:07.570502 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 24 00:31:07.596568 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:31:07.597599 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:31:07.601627 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:31:07.604308 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:31:07.628326 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:31:07.628402 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:31:07.628426 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 24 00:31:07.646550 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 24 00:31:07.660168 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:31:07.662643 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:31:07.669618 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:31:07.680717 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:31:07.710236 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:31:07.716700 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:31:07.739512 systemd-networkd[1170]: lo: Link UP
Jan 24 00:31:07.739524 systemd-networkd[1170]: lo: Gained carrier
Jan 24 00:31:07.741244 systemd-networkd[1170]: Enumeration completed
Jan 24 00:31:07.741733 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:31:07.741738 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:31:07.741891 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:31:07.742600 systemd[1]: Reached target network.target - Network.
Jan 24 00:31:07.745137 systemd-networkd[1170]: eth0: Link UP
Jan 24 00:31:07.745143 systemd-networkd[1170]: eth0: Gained carrier
Jan 24 00:31:07.745155 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:31:07.764564 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.16.136/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 24 00:31:08.106283 ignition[1125]: Ignition 2.19.0
Jan 24 00:31:08.106294 ignition[1125]: Stage: fetch-offline
Jan 24 00:31:08.106509 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:31:08.106518 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:31:08.106914 ignition[1125]: Ignition finished successfully
Jan 24 00:31:08.109903 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:31:08.113692 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
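The DHCPv4 lease above (172.31.16.136/20 from gateway 172.31.16.1) is the address the fetch stage below uses to reach the metadata service. A quick sanity check of the lease with the standard library:

    # Verify that the gateway from the DHCPv4 log line sits inside
    # the leased /20.
    import ipaddress
    iface = ipaddress.ip_interface("172.31.16.136/20")
    print(iface.network)                                         # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True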
Jan 24 00:31:08.129251 ignition[1179]: Ignition 2.19.0
Jan 24 00:31:08.129264 ignition[1179]: Stage: fetch
Jan 24 00:31:08.129629 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:31:08.129639 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:31:08.129723 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:31:08.160726 ignition[1179]: PUT result: OK
Jan 24 00:31:08.164654 ignition[1179]: parsed url from cmdline: ""
Jan 24 00:31:08.164666 ignition[1179]: no config URL provided
Jan 24 00:31:08.164677 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:31:08.164693 ignition[1179]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:31:08.164717 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:31:08.166557 ignition[1179]: PUT result: OK
Jan 24 00:31:08.166619 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 24 00:31:08.168687 ignition[1179]: GET result: OK
Jan 24 00:31:08.168803 ignition[1179]: parsing config with SHA512: 47d3eb8fdd50d1a83aa6c7bab2e980638fa94b2e5fcd854ee355445b6da7fcc39b6f3d568b8508428f4e54ef27a8ee6622ef4da54ce29321f045db321ac151dc
Jan 24 00:31:08.174296 unknown[1179]: fetched base config from "system"
Jan 24 00:31:08.175324 ignition[1179]: fetch: fetch complete
Jan 24 00:31:08.174319 unknown[1179]: fetched base config from "system"
Jan 24 00:31:08.175338 ignition[1179]: fetch: fetch passed
Jan 24 00:31:08.174347 unknown[1179]: fetched user config from "aws"
Jan 24 00:31:08.175404 ignition[1179]: Ignition finished successfully
Jan 24 00:31:08.179453 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:31:08.184684 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:31:08.201071 ignition[1186]: Ignition 2.19.0
Jan 24 00:31:08.201084 ignition[1186]: Stage: kargs
Jan 24 00:31:08.201594 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:31:08.201608 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:31:08.201725 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:31:08.202681 ignition[1186]: PUT result: OK
Jan 24 00:31:08.206000 ignition[1186]: kargs: kargs passed
Jan 24 00:31:08.206073 ignition[1186]: Ignition finished successfully
Jan 24 00:31:08.207982 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:31:08.212716 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:31:08.227734 ignition[1192]: Ignition 2.19.0
Jan 24 00:31:08.227748 ignition[1192]: Stage: disks
Jan 24 00:31:08.228193 ignition[1192]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:31:08.228207 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 24 00:31:08.228333 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 24 00:31:08.229301 ignition[1192]: PUT result: OK
Jan 24 00:31:08.232396 ignition[1192]: disks: disks passed
Jan 24 00:31:08.232455 ignition[1192]: Ignition finished successfully
Jan 24 00:31:08.234055 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:31:08.234612 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:31:08.235055 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:31:08.235536 systemd[1]: Reached target local-fs.target - Local File Systems.
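Each Ignition stage above re-authenticates to the EC2 instance metadata service: a PUT mints an IMDSv2 session token, then a GET fetches the user data, which is exactly the PUT/GET pairs in the log. A rough stdlib equivalent (a sketch, not Ignition's actual code; the header names are the documented IMDSv2 ones, the /2019-10-01/user-data path is taken from the log, and this only works from inside an EC2 instance):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # PUT /latest/api/token: mint a short-lived IMDSv2 session token.
    req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    token = urllib.request.urlopen(req).read().decode()

    # GET the user data (the Ignition config) using that token.
    req = urllib.request.Request(
        IMDS + "/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(req).read()
    print(len(user_data), "bytes of user data")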
Jan 24 00:31:08.236061 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:31:08.236628 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:31:08.241703 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:31:08.277973 systemd-fsck[1201]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 24 00:31:08.280815 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:31:08.284584 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:31:08.381485 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:31:08.381997 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:31:08.383039 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:31:08.394611 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:31:08.397573 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:31:08.398782 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 24 00:31:08.399598 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:31:08.399622 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:31:08.405553 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:31:08.411643 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:31:08.416487 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1220)
Jan 24 00:31:08.419505 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:31:08.419553 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:31:08.421941 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 24 00:31:08.434497 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 24 00:31:08.436113 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:31:08.828326 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:31:08.858593 initrd-setup-root[1251]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:31:08.863359 initrd-setup-root[1258]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:31:08.867858 initrd-setup-root[1265]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:31:09.172840 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:31:09.182045 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:31:09.185625 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:31:09.211279 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:31:09.216952 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:31:09.247514 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:31:09.249372 ignition[1332]: INFO : Ignition 2.19.0 Jan 24 00:31:09.249372 ignition[1332]: INFO : Stage: mount Jan 24 00:31:09.249372 ignition[1332]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:09.249372 ignition[1332]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:09.249372 ignition[1332]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:09.252372 ignition[1332]: INFO : PUT result: OK Jan 24 00:31:09.252968 ignition[1332]: INFO : mount: mount passed Jan 24 00:31:09.253520 ignition[1332]: INFO : Ignition finished successfully Jan 24 00:31:09.254815 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:31:09.261336 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:31:09.272744 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:31:09.290504 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1347) Jan 24 00:31:09.297960 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:31:09.298025 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:31:09.298050 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 24 00:31:09.303511 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 24 00:31:09.305597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:31:09.329575 ignition[1363]: INFO : Ignition 2.19.0 Jan 24 00:31:09.330272 ignition[1363]: INFO : Stage: files Jan 24 00:31:09.331178 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:09.331753 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:09.331753 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:09.333320 ignition[1363]: INFO : PUT result: OK Jan 24 00:31:09.336389 ignition[1363]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:31:09.337308 ignition[1363]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:31:09.337308 ignition[1363]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:31:09.370660 systemd-networkd[1170]: eth0: Gained IPv6LL Jan 24 00:31:09.374626 ignition[1363]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:31:09.375506 ignition[1363]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:31:09.375506 ignition[1363]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:31:09.375300 unknown[1363]: wrote ssh authorized keys file for user: core Jan 24 00:31:09.377814 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:31:09.378540 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:31:09.444188 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:31:09.635796 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:31:09.635796 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 
00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:31:09.637534 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 24 00:31:10.144566 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:31:11.503098 ignition[1363]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:31:11.503098 ignition[1363]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:31:11.516975 ignition[1363]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:31:11.518427 ignition[1363]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:31:11.518427 ignition[1363]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:31:11.518427 ignition[1363]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:31:11.518427 ignition[1363]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:31:11.518427 ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:31:11.518427 ignition[1363]: INFO : files: createResultFile: createFiles: op(e): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:31:11.518427 ignition[1363]: INFO : files: files passed Jan 24 00:31:11.518427 ignition[1363]: INFO : Ignition finished successfully Jan 24 00:31:11.519820 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:31:11.529788 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:31:11.533066 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:31:11.536065 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:31:11.536907 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:31:11.558493 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:11.558493 initrd-setup-root-after-ignition[1392]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:11.561583 initrd-setup-root-after-ignition[1396]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:31:11.563756 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:31:11.564457 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:31:11.568661 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:31:11.615363 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:31:11.615514 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:31:11.616780 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:31:11.617931 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:31:11.618777 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:31:11.623799 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:31:11.638490 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:31:11.644685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:31:11.657560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:31:11.658261 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:31:11.659445 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:31:11.660331 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:31:11.660534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:31:11.661749 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:31:11.662642 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:31:11.663599 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:31:11.664393 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:31:11.665202 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:31:11.666011 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:31:11.666949 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:31:11.667730 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 24 00:31:11.668906 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:31:11.669690 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:31:11.670404 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:31:11.670604 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:31:11.671823 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:31:11.672643 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:31:11.673327 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:31:11.673478 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:31:11.674166 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:31:11.674335 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:31:11.675867 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:31:11.676050 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:31:11.676789 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:31:11.676941 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:31:11.685842 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:31:11.686527 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:31:11.686726 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:31:11.692691 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:31:11.693293 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:31:11.693536 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:31:11.694398 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:31:11.696884 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:31:11.705338 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:31:11.705489 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:31:11.710620 ignition[1416]: INFO : Ignition 2.19.0 Jan 24 00:31:11.710620 ignition[1416]: INFO : Stage: umount Jan 24 00:31:11.713033 ignition[1416]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:31:11.713033 ignition[1416]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 24 00:31:11.713033 ignition[1416]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 24 00:31:11.715545 ignition[1416]: INFO : PUT result: OK Jan 24 00:31:11.718019 ignition[1416]: INFO : umount: umount passed Jan 24 00:31:11.718592 ignition[1416]: INFO : Ignition finished successfully Jan 24 00:31:11.721028 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:31:11.721851 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:31:11.723153 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:31:11.723213 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:31:11.724601 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:31:11.724657 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:31:11.725441 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 24 00:31:11.726083 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:31:11.728131 systemd[1]: Stopped target network.target - Network. Jan 24 00:31:11.728611 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:31:11.728669 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:31:11.729152 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:31:11.729619 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:31:11.733571 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:31:11.733922 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:31:11.734373 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:31:11.734692 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:31:11.734733 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:31:11.735144 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:31:11.735181 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:31:11.735489 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:31:11.735539 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:31:11.735826 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:31:11.735864 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:31:11.736618 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:31:11.737186 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:31:11.738766 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:31:11.739380 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:31:11.739461 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:31:11.739512 systemd-networkd[1170]: eth0: DHCPv6 lease lost Jan 24 00:31:11.741426 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:31:11.741575 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:31:11.744205 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:31:11.744277 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:31:11.745520 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:31:11.745568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:31:11.752588 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:31:11.753032 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:31:11.753095 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:31:11.753598 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:31:11.754140 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:31:11.754234 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:31:11.761386 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:31:11.761816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:31:11.762217 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:31:11.762259 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 24 00:31:11.763795 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:31:11.764194 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:31:11.768871 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:31:11.769509 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:31:11.770740 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:31:11.770963 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:31:11.771740 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:31:11.771776 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:31:11.773007 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:31:11.773059 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:31:11.774003 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:31:11.774061 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:31:11.775443 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:31:11.775510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:31:11.782685 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:31:11.783313 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:31:11.783381 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:31:11.784589 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:31:11.784636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:11.785379 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:31:11.785499 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:31:11.789905 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:31:11.790013 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:31:11.791746 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:31:11.802748 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:31:11.811477 systemd[1]: Switching root. Jan 24 00:31:11.851808 systemd-journald[179]: Journal stopped Jan 24 00:31:13.766413 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 24 00:31:13.766528 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:31:13.766551 kernel: SELinux: policy capability open_perms=1 Jan 24 00:31:13.766575 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:31:13.766593 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:31:13.766612 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:31:13.766635 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:31:13.766659 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:31:13.766678 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:31:13.766696 kernel: audit: type=1403 audit(1769214672.315:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:31:13.766721 systemd[1]: Successfully loaded SELinux policy in 51.097ms. Jan 24 00:31:13.766750 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.532ms. 
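
Once the SELinux policy load above completes, the kernel exposes the subsystem's state through selinuxfs. A quick Python check of the current mode (assumes /sys/fs/selinux is mounted, as it is on this system):

    from pathlib import Path

    enforce = Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        # "1" = enforcing, "0" = permissive
        mode = "enforcing" if enforce.read_text().strip() == "1" else "permissive"
        print("SELinux:", mode)
    else:
        print("SELinux disabled or selinuxfs not mounted")
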
Jan 24 00:31:13.766771 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:31:13.766792 systemd[1]: Detected virtualization amazon. Jan 24 00:31:13.766824 systemd[1]: Detected architecture x86-64. Jan 24 00:31:13.766844 systemd[1]: Detected first boot. Jan 24 00:31:13.766864 systemd[1]: Initializing machine ID from VM UUID. Jan 24 00:31:13.766883 zram_generator::config[1459]: No configuration found. Jan 24 00:31:13.766903 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:31:13.766924 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:31:13.766943 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:31:13.766967 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:31:13.766987 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:31:13.767010 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:31:13.767030 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:31:13.767050 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:31:13.767069 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:31:13.767088 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:31:13.767108 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:31:13.767127 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:31:13.767145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:31:13.767167 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:31:13.767188 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:31:13.767207 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:31:13.767229 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:31:13.767252 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:31:13.767271 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:31:13.767290 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:31:13.767309 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:31:13.767330 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:31:13.767353 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:31:13.767372 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:31:13.767394 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:31:13.767414 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:31:13.767432 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:31:13.767451 systemd[1]: Reached target swap.target - Swaps. 
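
"Initializing machine ID from VM UUID" above: on first boot in a VM, systemd seeds /etc/machine-id from the hypervisor-provided DMI product UUID instead of generating a random one. A rough illustration of where that UUID is read from (root is required, and systemd's real derivation includes validation that this sketch omits):

    from pathlib import Path

    # The SMBIOS product UUID exposed by the hypervisor (here, KVM on EC2).
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id_like = uuid.replace("-", "").lower()  # 32 hex characters
    print(machine_id_like)
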
Jan 24 00:31:13.767504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:31:13.767530 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:31:13.767562 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:31:13.767582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:31:13.767603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:31:13.767625 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:31:13.767647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:31:13.767669 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:31:13.767690 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:31:13.767712 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:13.767734 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:31:13.767759 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:31:13.767779 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:31:13.767802 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:31:13.767822 systemd[1]: Reached target machines.target - Containers. Jan 24 00:31:13.767843 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:31:13.767865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:13.767887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:31:13.767908 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:31:13.767932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:31:13.767960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:31:13.767981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:31:13.768002 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:31:13.768024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:31:13.768046 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:31:13.768068 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:31:13.768090 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:31:13.768109 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:31:13.768131 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:31:13.768150 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:31:13.768170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:31:13.768189 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:31:13.768208 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 24 00:31:13.768227 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:31:13.768248 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:31:13.768266 systemd[1]: Stopped verity-setup.service. Jan 24 00:31:13.768284 kernel: loop: module loaded Jan 24 00:31:13.768309 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:13.768329 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:31:13.768349 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:31:13.768369 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:31:13.768390 kernel: fuse: init (API version 7.39) Jan 24 00:31:13.768407 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:31:13.769265 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:31:13.769294 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:31:13.769313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:31:13.769335 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:31:13.769357 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:31:13.769378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:31:13.769399 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:31:13.769424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:31:13.769446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:31:13.769513 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:31:13.769537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:31:13.769557 kernel: ACPI: bus type drm_connector registered Jan 24 00:31:13.769575 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:31:13.769598 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:31:13.769650 systemd-journald[1541]: Collecting audit messages is disabled. Jan 24 00:31:13.769691 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:31:13.769713 systemd-journald[1541]: Journal started Jan 24 00:31:13.769758 systemd-journald[1541]: Runtime Journal (/run/log/journal/ec248ece9b5fec2b93ed7e2c8b0ce070) is 4.7M, max 38.2M, 33.4M free. Jan 24 00:31:13.769824 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:31:13.333523 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:31:13.386103 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 24 00:31:13.386542 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:31:13.774521 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:31:13.778230 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:31:13.778451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:31:13.779959 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:31:13.802958 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 24 00:31:13.812577 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:31:13.824619 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:31:13.828572 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:31:13.828632 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:31:13.833403 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:31:13.840632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:31:13.844698 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:31:13.847719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:31:13.855667 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:31:13.861633 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:31:13.862418 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:31:13.864027 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:31:13.865272 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:31:13.869739 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:31:13.873065 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:31:13.877339 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:31:13.878788 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:31:13.879741 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:31:13.893691 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:31:13.895041 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:31:13.915879 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:31:13.928759 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:31:13.935687 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:31:13.936646 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:31:13.947382 systemd-journald[1541]: Time spent on flushing to /var/log/journal/ec248ece9b5fec2b93ed7e2c8b0ce070 is 117.794ms for 988 entries. Jan 24 00:31:13.947382 systemd-journald[1541]: System Journal (/var/log/journal/ec248ece9b5fec2b93ed7e2c8b0ce070) is 8.0M, max 195.6M, 187.6M free. Jan 24 00:31:14.075502 systemd-journald[1541]: Received client request to flush runtime journal. Jan 24 00:31:14.075610 kernel: loop0: detected capacity change from 0 to 142488 Jan 24 00:31:13.946690 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:31:13.986124 udevadm[1594]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
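
journald's flush report above, 117.794 ms for 988 entries, implies a per-entry cost of roughly 0.12 ms, i.e. on the order of eight thousand entries per second into the system journal:

    ms, entries = 117.794, 988
    print(f"{ms / entries:.3f} ms/entry")            # ~0.119 ms per entry
    print(f"{entries / (ms / 1000):.0f} entries/s")  # ~8387 entries per second
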
Jan 24 00:31:14.023943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:31:14.034093 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:31:14.048260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:31:14.087350 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:31:14.099661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:31:14.103186 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:31:14.130874 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:31:14.137992 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Jan 24 00:31:14.138022 systemd-tmpfiles[1603]: ACLs are not supported, ignoring. Jan 24 00:31:14.146393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:31:14.151495 kernel: loop1: detected capacity change from 0 to 229808 Jan 24 00:31:14.441576 kernel: loop2: detected capacity change from 0 to 61336 Jan 24 00:31:14.571942 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:31:14.663091 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:31:14.676311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:31:14.701228 kernel: loop4: detected capacity change from 0 to 142488 Jan 24 00:31:14.707544 systemd-udevd[1614]: Using default interface naming scheme 'v255'. Jan 24 00:31:14.729496 kernel: loop5: detected capacity change from 0 to 229808 Jan 24 00:31:14.754488 kernel: loop6: detected capacity change from 0 to 61336 Jan 24 00:31:14.769496 kernel: loop7: detected capacity change from 0 to 140768 Jan 24 00:31:14.781970 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:31:14.790730 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:31:14.793634 (sd-merge)[1616]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 24 00:31:14.794919 (sd-merge)[1616]: Merged extensions into '/usr'. Jan 24 00:31:14.815033 systemd[1]: Reloading requested from client PID 1587 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:31:14.815206 systemd[1]: Reloading... Jan 24 00:31:14.844261 (udev-worker)[1627]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:31:14.992490 zram_generator::config[1665]: No configuration found. 
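
(sd-merge) above combines the containerd-flatcar, docker-flatcar, kubernetes, and oem-ami extension images with the base OS. systemd-sysext does this by mounting a read-only overlayfs on /usr whose lower layers are the extensions' /usr trees stacked over the host's. A conceptual sketch of that mount; the staging paths are hypothetical, and the real service performs the mount through the kernel API rather than shelling out:

    import subprocess

    lowers = [
        "/run/extensions/kubernetes/usr",      # hypothetical unpacked trees,
        "/run/extensions/docker-flatcar/usr",  # topmost layer listed first
        "/usr",                                # base layer goes last
    ]
    # No upperdir, so the resulting overlay is read-only by construction.
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", "lowerdir=" + ":".join(lowers), "/usr"],
        check=True,
    )
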
Jan 24 00:31:15.049928 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 24 00:31:15.063542 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 24 00:31:15.070496 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 24 00:31:15.086542 kernel: ACPI: button: Power Button [PWRF] Jan 24 00:31:15.092789 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 24 00:31:15.127645 kernel: ACPI: button: Sleep Button [SLPF] Jan 24 00:31:15.171495 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1631) Jan 24 00:31:15.177497 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:31:15.353127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:31:15.456937 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 24 00:31:15.458004 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:31:15.458428 systemd[1]: Reloading finished in 640 ms. Jan 24 00:31:15.490662 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:31:15.515428 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:31:15.516147 ldconfig[1582]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:31:15.519882 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:31:15.534724 systemd[1]: Starting ensure-sysext.service... Jan 24 00:31:15.538671 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:31:15.555128 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:31:15.559688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:31:15.569685 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:31:15.576733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:31:15.586553 lvm[1805]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:31:15.591567 systemd[1]: Reloading requested from client PID 1804 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:31:15.591592 systemd[1]: Reloading... Jan 24 00:31:15.676764 systemd-tmpfiles[1807]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:31:15.680101 systemd-tmpfiles[1807]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:31:15.687509 systemd-tmpfiles[1807]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:31:15.689132 systemd-tmpfiles[1807]: ACLs are not supported, ignoring. Jan 24 00:31:15.690557 systemd-tmpfiles[1807]: ACLs are not supported, ignoring. Jan 24 00:31:15.705841 systemd-tmpfiles[1807]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:31:15.705860 systemd-tmpfiles[1807]: Skipping /boot Jan 24 00:31:15.737446 systemd-tmpfiles[1807]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 24 00:31:15.739357 systemd-tmpfiles[1807]: Skipping /boot Jan 24 00:31:15.757535 zram_generator::config[1844]: No configuration found. Jan 24 00:31:15.818002 systemd-networkd[1622]: lo: Link UP Jan 24 00:31:15.818017 systemd-networkd[1622]: lo: Gained carrier Jan 24 00:31:15.820277 systemd-networkd[1622]: Enumeration completed Jan 24 00:31:15.820790 systemd-networkd[1622]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:15.820795 systemd-networkd[1622]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:31:15.824324 systemd-networkd[1622]: eth0: Link UP Jan 24 00:31:15.824728 systemd-networkd[1622]: eth0: Gained carrier Jan 24 00:31:15.824834 systemd-networkd[1622]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:31:15.834664 systemd-networkd[1622]: eth0: DHCPv4 address 172.31.16.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 24 00:31:15.920563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:31:15.997543 systemd[1]: Reloading finished in 405 ms. Jan 24 00:31:16.013291 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:31:16.013956 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:31:16.017909 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:31:16.018630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:31:16.019502 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:31:16.020287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:31:16.029015 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:31:16.037836 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:31:16.042854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:31:16.046426 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:31:16.059847 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:31:16.069248 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:31:16.081666 lvm[1907]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:31:16.080072 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:31:16.085835 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 24 00:31:16.093417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:16.093735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:16.101363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:31:16.113342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
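
The DHCPv4 lease above, 172.31.16.136/20 with gateway 172.31.16.1, can be sanity-checked with the standard library: the /20 spans 4096 addresses and the gateway is on-link:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.16.136/20")
    net = iface.network
    print(net)                # 172.31.16.0/20
    print(net.num_addresses)  # 4096
    print(ipaddress.ip_address("172.31.16.1") in net)  # True: gateway on-link
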
Jan 24 00:31:16.123343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:31:16.124087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:31:16.124261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:16.129045 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:16.129332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:16.130139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:31:16.130297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:16.137742 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:16.138103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:31:16.153127 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:31:16.153954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:31:16.154238 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:31:16.155637 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:31:16.162583 systemd[1]: Finished ensure-sysext.service. Jan 24 00:31:16.173322 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:31:16.183245 augenrules[1932]: No rules Jan 24 00:31:16.183045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:31:16.183274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:31:16.184332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:31:16.184756 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:31:16.187602 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:31:16.195883 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:31:16.197270 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:31:16.197459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:31:16.198734 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:31:16.198927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:31:16.205865 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:31:16.207434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:31:16.208099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 24 00:31:16.215798 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:31:16.240789 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:31:16.246202 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:31:16.247391 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:31:16.257607 systemd-resolved[1918]: Positive Trust Anchors: Jan 24 00:31:16.257627 systemd-resolved[1918]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:31:16.257664 systemd-resolved[1918]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:31:16.275278 systemd-resolved[1918]: Defaulting to hostname 'linux'. Jan 24 00:31:16.277181 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:31:16.277717 systemd[1]: Reached target network.target - Network. Jan 24 00:31:16.278085 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:31:16.278411 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:31:16.278950 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:31:16.279806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:31:16.280316 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:31:16.280757 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 24 00:31:16.281070 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:31:16.281378 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:31:16.281416 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:31:16.281720 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:31:16.283348 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:31:16.285206 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:31:16.295822 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:31:16.296953 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:31:16.297449 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:31:16.297826 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:31:16.298209 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:31:16.298242 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
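
The positive trust anchor listed above is the DNSSEC root DS record. Splitting its RDATA shows key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256), and digest type 2 (SHA-256):

    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    _owner, _cls, _rrtype, keytag, alg, dtype, digest = anchor.split()
    print("key tag:", keytag)     # 20326: identifies the root KSK-2017
    print("algorithm:", alg)      # 8 = RSA/SHA-256
    print("digest type:", dtype)  # 2 = SHA-256 digest of the DNSKEY
    print("digest:", digest[:16] + "...")
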
Jan 24 00:31:16.299558 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:31:16.302638 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:31:16.307641 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:31:16.309577 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:31:16.311755 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:31:16.313544 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:31:16.314651 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:31:16.316369 systemd[1]: Started ntpd.service - Network Time Service. Jan 24 00:31:16.319571 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:31:16.324591 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 24 00:31:16.326619 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:31:16.340176 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:31:16.345635 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:31:16.346365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 24 00:31:16.349803 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:31:16.359630 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:31:16.364885 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:31:16.392406 jq[1952]: false Jan 24 00:31:16.392739 extend-filesystems[1953]: Found loop4 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found loop5 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found loop6 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found loop7 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1p1 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1p2 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1p3 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found usr Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1p4 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1p6 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1p7 Jan 24 00:31:16.394785 extend-filesystems[1953]: Found nvme0n1p9 Jan 24 00:31:16.394785 extend-filesystems[1953]: Checking size of /dev/nvme0n1p9 Jan 24 00:31:16.403947 dbus-daemon[1951]: [system] SELinux support is enabled Jan 24 00:31:16.404171 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:31:16.407606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:31:16.407651 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 24 00:31:16.408632 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:31:16.408661 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:31:16.412029 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:31:16.412206 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:31:16.422731 dbus-daemon[1951]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1622 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 24 00:31:16.425845 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:31:16.426523 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:31:16.427847 jq[1964]: true Jan 24 00:31:16.430722 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:31:16.447039 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 24 00:31:16.466499 jq[1981]: true Jan 24 00:31:16.482258 update_engine[1961]: I20260124 00:31:16.481895 1961 main.cc:92] Flatcar Update Engine starting Jan 24 00:31:16.485780 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:31:16.487636 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:31:16.487664 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 22:00:38 UTC 2026 (1): Starting Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: ---------------------------------------------------- Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: corporation. Support and training for ntp-4 are Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: available at https://www.nwtime.org/support Jan 24 00:31:16.488927 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: ---------------------------------------------------- Jan 24 00:31:16.487672 ntpd[1955]: ---------------------------------------------------- Jan 24 00:31:16.494867 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: proto: precision = 0.054 usec (-24) Jan 24 00:31:16.487679 ntpd[1955]: ntp-4 is maintained by Network Time Foundation, Jan 24 00:31:16.487686 ntpd[1955]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 24 00:31:16.487692 ntpd[1955]: corporation. Support and training for ntp-4 are Jan 24 00:31:16.487699 ntpd[1955]: available at https://www.nwtime.org/support Jan 24 00:31:16.487707 ntpd[1955]: ---------------------------------------------------- Jan 24 00:31:16.491376 ntpd[1955]: proto: precision = 0.054 usec (-24) Jan 24 00:31:16.498835 update_engine[1961]: I20260124 00:31:16.498644 1961 update_check_scheduler.cc:74] Next update check in 2m22s Jan 24 00:31:16.500161 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
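
ntpd's "precision = 0.054 usec (-24)" above pairs a measured clock-read precision with its base-2 exponent: the log2 of 0.054 microseconds, expressed in seconds, rounds to -24, and the nominal value of 2^-24 s is about 0.06 microseconds:

    import math

    measured = 0.054e-6                        # seconds, from the log
    print(round(math.log2(measured)))          # -24, the exponent ntpd prints
    print(f"{2**-24 * 1e6:.4f} usec nominal")  # 0.0596 usec
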
Jan 24 00:31:16.502158 ntpd[1955]: basedate set to 2026-01-11
Jan 24 00:31:16.504562 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: basedate set to 2026-01-11
Jan 24 00:31:16.504562 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: gps base set to 2026-01-11 (week 2401)
Jan 24 00:31:16.502179 ntpd[1955]: gps base set to 2026-01-11 (week 2401)
Jan 24 00:31:16.512229 extend-filesystems[1953]: Resized partition /dev/nvme0n1p9
Jan 24 00:31:16.513143 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123
Jan 24 00:31:16.513196 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 24 00:31:16.513256 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Listen and drop on 0 v6wildcard [::]:123
Jan 24 00:31:16.513256 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 24 00:31:16.513347 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123
Jan 24 00:31:16.515537 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Listen normally on 2 lo 127.0.0.1:123
Jan 24 00:31:16.515537 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Listen normally on 3 eth0 172.31.16.136:123
Jan 24 00:31:16.515537 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Listen normally on 4 lo [::1]:123
Jan 24 00:31:16.515537 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: bind(21) AF_INET6 fe80::496:fbff:feee:257f%2#123 flags 0x11 failed: Cannot assign requested address
Jan 24 00:31:16.515537 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: unable to create socket on eth0 (5) for fe80::496:fbff:feee:257f%2#123
Jan 24 00:31:16.515537 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: failed to init interface for address fe80::496:fbff:feee:257f%2
Jan 24 00:31:16.515537 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: Listening on routing socket on fd #21 for interface updates
Jan 24 00:31:16.513378 ntpd[1955]: Listen normally on 3 eth0 172.31.16.136:123
Jan 24 00:31:16.513409 ntpd[1955]: Listen normally on 4 lo [::1]:123
Jan 24 00:31:16.513441 ntpd[1955]: bind(21) AF_INET6 fe80::496:fbff:feee:257f%2#123 flags 0x11 failed: Cannot assign requested address
Jan 24 00:31:16.513459 ntpd[1955]: unable to create socket on eth0 (5) for fe80::496:fbff:feee:257f%2#123
Jan 24 00:31:16.513494 ntpd[1955]: failed to init interface for address fe80::496:fbff:feee:257f%2
Jan 24 00:31:16.513519 ntpd[1955]: Listening on routing socket on fd #21 for interface updates
Jan 24 00:31:16.516528 extend-filesystems[2006]: resize2fs 1.47.1 (20-May-2024)
Jan 24 00:31:16.526394 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 24 00:31:16.532600 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 24 00:31:16.532600 ntpd[1955]: 24 Jan 00:31:16 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 24 00:31:16.526430 ntpd[1955]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 24 00:31:16.535517 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 24 00:31:16.537905 tar[1973]: linux-amd64/LICENSE
Jan 24 00:31:16.538131 tar[1973]: linux-amd64/helm
Jan 24 00:31:16.545800 (ntainerd)[1990]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:31:16.553800 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 24 00:31:16.557937 coreos-metadata[1950]: Jan 24 00:31:16.557 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 24 00:31:16.563620 coreos-metadata[1950]: Jan 24 00:31:16.563 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 24 00:31:16.564717 coreos-metadata[1950]: Jan 24 00:31:16.564 INFO Fetch successful
Jan 24 00:31:16.564717 coreos-metadata[1950]: Jan 24 00:31:16.564 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 24 00:31:16.566997 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:31:16.568734 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:31:16.570249 coreos-metadata[1950]: Jan 24 00:31:16.570 INFO Fetch successful
Jan 24 00:31:16.570249 coreos-metadata[1950]: Jan 24 00:31:16.570 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 24 00:31:16.578706 coreos-metadata[1950]: Jan 24 00:31:16.578 INFO Fetch successful
Jan 24 00:31:16.578706 coreos-metadata[1950]: Jan 24 00:31:16.578 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 24 00:31:16.579524 coreos-metadata[1950]: Jan 24 00:31:16.579 INFO Fetch successful
Jan 24 00:31:16.579524 coreos-metadata[1950]: Jan 24 00:31:16.579 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 24 00:31:16.580179 coreos-metadata[1950]: Jan 24 00:31:16.580 INFO Fetch failed with 404: resource not found
Jan 24 00:31:16.580179 coreos-metadata[1950]: Jan 24 00:31:16.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 24 00:31:16.580890 coreos-metadata[1950]: Jan 24 00:31:16.580 INFO Fetch successful
Jan 24 00:31:16.580890 coreos-metadata[1950]: Jan 24 00:31:16.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 24 00:31:16.581732 coreos-metadata[1950]: Jan 24 00:31:16.581 INFO Fetch successful
Jan 24 00:31:16.581732 coreos-metadata[1950]: Jan 24 00:31:16.581 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 24 00:31:16.582180 coreos-metadata[1950]: Jan 24 00:31:16.582 INFO Fetch successful
Jan 24 00:31:16.582280 coreos-metadata[1950]: Jan 24 00:31:16.582 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 24 00:31:16.586041 coreos-metadata[1950]: Jan 24 00:31:16.585 INFO Fetch successful
Jan 24 00:31:16.586041 coreos-metadata[1950]: Jan 24 00:31:16.586 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 24 00:31:16.588152 coreos-metadata[1950]: Jan 24 00:31:16.587 INFO Fetch successful
Jan 24 00:31:16.643680 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 24 00:31:16.644961 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 24 00:31:16.645704 dbus-daemon[1951]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1982 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 24 00:31:16.647848 systemd-logind[1960]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 24 00:31:16.647870 systemd-logind[1960]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 24 00:31:16.647888 systemd-logind[1960]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:31:16.652875 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 24 00:31:16.656218 systemd-logind[1960]: New seat seat0.
Jan 24 00:31:16.661508 bash[2028]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:31:16.661746 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 24 00:31:16.662974 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:31:16.666029 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1619)
Jan 24 00:31:16.666651 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:31:16.671134 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 24 00:31:16.671925 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 00:31:16.680995 systemd[1]: Starting sshkeys.service...
Jan 24 00:31:16.711420 extend-filesystems[2006]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 24 00:31:16.711420 extend-filesystems[2006]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 24 00:31:16.711420 extend-filesystems[2006]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Jan 24 00:31:16.706917 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:31:16.716825 extend-filesystems[1953]: Resized filesystem in /dev/nvme0n1p9
Jan 24 00:31:16.707095 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:31:16.729977 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 24 00:31:16.731233 polkitd[2031]: Started polkitd version 121
Jan 24 00:31:16.738847 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 24 00:31:16.753889 polkitd[2031]: Loading rules from directory /etc/polkit-1/rules.d
Jan 24 00:31:16.753961 polkitd[2031]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 24 00:31:16.764818 polkitd[2031]: Finished loading, compiling and executing 2 rules
Jan 24 00:31:16.766548 dbus-daemon[1951]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 24 00:31:16.766702 systemd[1]: Started polkit.service - Authorization Manager.
Jan 24 00:31:16.773288 polkitd[2031]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 24 00:31:16.838598 systemd-resolved[1918]: System hostname changed to 'ip-172-31-16-136'.
Jan 24 00:31:16.838710 systemd-hostnamed[1982]: Hostname set to <ip-172-31-16-136> (transient)
Jan 24 00:31:16.890297 sshd_keygen[1971]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 00:31:16.938260 locksmithd[1993]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:31:16.939233 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 00:31:16.952818 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 00:31:16.954422 coreos-metadata[2047]: Jan 24 00:31:16.954 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 24 00:31:16.956710 coreos-metadata[2047]: Jan 24 00:31:16.955 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 24 00:31:16.957206 coreos-metadata[2047]: Jan 24 00:31:16.957 INFO Fetch successful
Jan 24 00:31:16.957206 coreos-metadata[2047]: Jan 24 00:31:16.957 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 24 00:31:16.961403 coreos-metadata[2047]: Jan 24 00:31:16.961 INFO Fetch successful
Jan 24 00:31:16.970024 unknown[2047]: wrote ssh authorized keys file for user: core
Jan 24 00:31:17.005486 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 00:31:17.005660 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 00:31:17.017604 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 00:31:17.022082 update-ssh-keys[2157]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:31:17.025823 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 24 00:31:17.029819 systemd[1]: Finished sshkeys.service.
Jan 24 00:31:17.053246 systemd-networkd[1622]: eth0: Gained IPv6LL
Jan 24 00:31:17.054542 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 00:31:17.067583 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 00:31:17.076824 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 00:31:17.077900 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 00:31:17.079712 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 00:31:17.081850 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 00:31:17.094828 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 24 00:31:17.098046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:31:17.104777 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 00:31:17.124494 containerd[1990]: time="2026-01-24T00:31:17.123395466Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 00:31:17.164005 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 00:31:17.172276 containerd[1990]: time="2026-01-24T00:31:17.172232163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:31:17.173873 containerd[1990]: time="2026-01-24T00:31:17.173830776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:31:17.173873 containerd[1990]: time="2026-01-24T00:31:17.173871784Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 00:31:17.173974 containerd[1990]: time="2026-01-24T00:31:17.173890891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 00:31:17.174043 containerd[1990]: time="2026-01-24T00:31:17.174026209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 00:31:17.174068 containerd[1990]: time="2026-01-24T00:31:17.174047328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174113 containerd[1990]: time="2026-01-24T00:31:17.174097876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174136 containerd[1990]: time="2026-01-24T00:31:17.174113924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174299 containerd[1990]: time="2026-01-24T00:31:17.174279049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174329 containerd[1990]: time="2026-01-24T00:31:17.174299294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174329 containerd[1990]: time="2026-01-24T00:31:17.174311634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174329 containerd[1990]: time="2026-01-24T00:31:17.174322525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174395 containerd[1990]: time="2026-01-24T00:31:17.174385201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174552 amazon-ssm-agent[2167]: Initializing new seelog logger
Jan 24 00:31:17.174761 containerd[1990]: time="2026-01-24T00:31:17.174602145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174761 containerd[1990]: time="2026-01-24T00:31:17.174717013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:31:17.174761 containerd[1990]: time="2026-01-24T00:31:17.174731454Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 00:31:17.174851 containerd[1990]: time="2026-01-24T00:31:17.174803897Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 00:31:17.174851 containerd[1990]: time="2026-01-24T00:31:17.174842806Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:31:17.175294 amazon-ssm-agent[2167]: New Seelog Logger Creation Complete
Jan 24 00:31:17.175294 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.175294 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.175609 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 processing appconfig overrides
Jan 24 00:31:17.175950 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.175950 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.176088 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 processing appconfig overrides
Jan 24 00:31:17.176273 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.176273 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.176497 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 processing appconfig overrides
Jan 24 00:31:17.176756 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO Proxy environment variables:
Jan 24 00:31:17.179688 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.180276 amazon-ssm-agent[2167]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 24 00:31:17.180276 amazon-ssm-agent[2167]: 2026/01/24 00:31:17 processing appconfig overrides
Jan 24 00:31:17.182211 containerd[1990]: time="2026-01-24T00:31:17.182173652Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 00:31:17.182318 containerd[1990]: time="2026-01-24T00:31:17.182285589Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 00:31:17.182344 containerd[1990]: time="2026-01-24T00:31:17.182325971Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 00:31:17.182367 containerd[1990]: time="2026-01-24T00:31:17.182342376Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 00:31:17.182367 containerd[1990]: time="2026-01-24T00:31:17.182356594Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 00:31:17.182584 containerd[1990]: time="2026-01-24T00:31:17.182567568Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 00:31:17.182941 containerd[1990]: time="2026-01-24T00:31:17.182919239Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 00:31:17.184434 containerd[1990]: time="2026-01-24T00:31:17.184397130Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 00:31:17.184499 containerd[1990]: time="2026-01-24T00:31:17.184438851Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 00:31:17.184499 containerd[1990]: time="2026-01-24T00:31:17.184460319Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 00:31:17.184545 containerd[1990]: time="2026-01-24T00:31:17.184498178Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184545 containerd[1990]: time="2026-01-24T00:31:17.184512543Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184545 containerd[1990]: time="2026-01-24T00:31:17.184525253Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184617 containerd[1990]: time="2026-01-24T00:31:17.184555889Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184617 containerd[1990]: time="2026-01-24T00:31:17.184571380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184617 containerd[1990]: time="2026-01-24T00:31:17.184584460Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184617 containerd[1990]: time="2026-01-24T00:31:17.184596243Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184617 containerd[1990]: time="2026-01-24T00:31:17.184609620Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 00:31:17.184717 containerd[1990]: time="2026-01-24T00:31:17.184641602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184717 containerd[1990]: time="2026-01-24T00:31:17.184655653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184717 containerd[1990]: time="2026-01-24T00:31:17.184667328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184717 containerd[1990]: time="2026-01-24T00:31:17.184679844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184805 containerd[1990]: time="2026-01-24T00:31:17.184719379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184805 containerd[1990]: time="2026-01-24T00:31:17.184733813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184805 containerd[1990]: time="2026-01-24T00:31:17.184746851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184805 containerd[1990]: time="2026-01-24T00:31:17.184760138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184805 containerd[1990]: time="2026-01-24T00:31:17.184789757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184904 containerd[1990]: time="2026-01-24T00:31:17.184807625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184904 containerd[1990]: time="2026-01-24T00:31:17.184820118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184904 containerd[1990]: time="2026-01-24T00:31:17.184834584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184904 containerd[1990]: time="2026-01-24T00:31:17.184846436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.184904 containerd[1990]: time="2026-01-24T00:31:17.184874436Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 00:31:17.184904 containerd[1990]: time="2026-01-24T00:31:17.184899123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.185035 containerd[1990]: time="2026-01-24T00:31:17.184911377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.185035 containerd[1990]: time="2026-01-24T00:31:17.184921701Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 00:31:17.185077 containerd[1990]: time="2026-01-24T00:31:17.184996437Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 00:31:17.185101 containerd[1990]: time="2026-01-24T00:31:17.185072483Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 00:31:17.185101 containerd[1990]: time="2026-01-24T00:31:17.185084745Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 00:31:17.185101 containerd[1990]: time="2026-01-24T00:31:17.185096266Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 00:31:17.185168 containerd[1990]: time="2026-01-24T00:31:17.185105130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.185168 containerd[1990]: time="2026-01-24T00:31:17.185117345Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 00:31:17.185168 containerd[1990]: time="2026-01-24T00:31:17.185139720Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 00:31:17.185168 containerd[1990]: time="2026-01-24T00:31:17.185149642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 00:31:17.186572 containerd[1990]: time="2026-01-24T00:31:17.185718591Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 00:31:17.186572 containerd[1990]: time="2026-01-24T00:31:17.185834884Z" level=info msg="Connect containerd service"
Jan 24 00:31:17.186572 containerd[1990]: time="2026-01-24T00:31:17.185886881Z" level=info msg="using legacy CRI server"
Jan 24 00:31:17.186572 containerd[1990]: time="2026-01-24T00:31:17.185896231Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 00:31:17.186572 containerd[1990]: time="2026-01-24T00:31:17.186153118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 00:31:17.194746 containerd[1990]: time="2026-01-24T00:31:17.194496045Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:31:17.195902 containerd[1990]: time="2026-01-24T00:31:17.195601959Z" level=info msg="Start subscribing containerd event"
Jan 24 00:31:17.195902 containerd[1990]: time="2026-01-24T00:31:17.195839092Z" level=info msg="Start recovering state"
Jan 24 00:31:17.195989 containerd[1990]: time="2026-01-24T00:31:17.195931814Z" level=info msg="Start event monitor"
Jan 24 00:31:17.195989 containerd[1990]: time="2026-01-24T00:31:17.195953013Z" level=info msg="Start snapshots syncer"
Jan 24 00:31:17.195989 containerd[1990]: time="2026-01-24T00:31:17.195962212Z" level=info msg="Start cni network conf syncer for default"
Jan 24 00:31:17.195989 containerd[1990]: time="2026-01-24T00:31:17.195970403Z" level=info msg="Start streaming server"
Jan 24 00:31:17.196312 containerd[1990]: time="2026-01-24T00:31:17.196146030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 00:31:17.196379 containerd[1990]: time="2026-01-24T00:31:17.196254981Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 00:31:17.196945 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 00:31:17.198629 containerd[1990]: time="2026-01-24T00:31:17.198264928Z" level=info msg="containerd successfully booted in 0.075804s"
Jan 24 00:31:17.276892 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO https_proxy:
Jan 24 00:31:17.376731 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO http_proxy:
Jan 24 00:31:17.447855 tar[1973]: linux-amd64/README.md
Jan 24 00:31:17.459582 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 00:31:17.475243 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO no_proxy:
Jan 24 00:31:17.573511 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO Checking if agent identity type OnPrem can be assumed
Jan 24 00:31:17.618547 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO Checking if agent identity type EC2 can be assumed
Jan 24 00:31:17.618547 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO Agent will take identity from EC2
Jan 24 00:31:17.618547 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 24 00:31:17.618547 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 24 00:31:17.618547 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 24 00:31:17.618547 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [amazon-ssm-agent] Starting Core Agent
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [Registrar] Starting registrar module
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [EC2Identity] EC2 registration was successful.
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [CredentialRefresher] credentialRefresher has started
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 24 00:31:17.618985 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 24 00:31:17.672622 amazon-ssm-agent[2167]: 2026-01-24 00:31:17 INFO [CredentialRefresher] Next credential rotation will be in 30.01666058725 minutes
Jan 24 00:31:18.632222 amazon-ssm-agent[2167]: 2026-01-24 00:31:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 24 00:31:18.733525 amazon-ssm-agent[2167]: 2026-01-24 00:31:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2193) started
Jan 24 00:31:18.834218 amazon-ssm-agent[2167]: 2026-01-24 00:31:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 24 00:31:19.404018 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 24 00:31:19.410760 systemd[1]: Started sshd@0-172.31.16.136:22-4.153.228.146:56530.service - OpenSSH per-connection server daemon (4.153.228.146:56530).
Jan 24 00:31:19.488214 ntpd[1955]: Listen normally on 6 eth0 [fe80::496:fbff:feee:257f%2]:123
Jan 24 00:31:19.488677 ntpd[1955]: 24 Jan 00:31:19 ntpd[1955]: Listen normally on 6 eth0 [fe80::496:fbff:feee:257f%2]:123
Jan 24 00:31:19.949774 sshd[2205]: Accepted publickey for core from 4.153.228.146 port 56530 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:31:19.951873 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:31:19.961631 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 00:31:19.969776 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 00:31:19.972806 systemd-logind[1960]: New session 1 of user core.
Jan 24 00:31:19.982414 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 00:31:19.990714 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 00:31:19.994896 (systemd)[2209]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 00:31:20.132069 systemd[2209]: Queued start job for default target default.target.
Jan 24 00:31:20.138871 systemd[2209]: Created slice app.slice - User Application Slice.
Jan 24 00:31:20.138932 systemd[2209]: Reached target paths.target - Paths.
Jan 24 00:31:20.138953 systemd[2209]: Reached target timers.target - Timers.
Jan 24 00:31:20.140620 systemd[2209]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 00:31:20.161645 systemd[2209]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 00:31:20.161815 systemd[2209]: Reached target sockets.target - Sockets.
Jan 24 00:31:20.161839 systemd[2209]: Reached target basic.target - Basic System.
Jan 24 00:31:20.161899 systemd[2209]: Reached target default.target - Main User Target.
Jan 24 00:31:20.161940 systemd[2209]: Startup finished in 159ms.
Jan 24 00:31:20.162023 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 00:31:20.168653 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 00:31:20.463499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:31:20.464378 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 00:31:20.465284 systemd[1]: Startup finished in 592ms (kernel) + 7.608s (initrd) + 8.200s (userspace) = 16.400s.
Jan 24 00:31:20.469706 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:31:20.553246 systemd[1]: Started sshd@1-172.31.16.136:22-4.153.228.146:56546.service - OpenSSH per-connection server daemon (4.153.228.146:56546).
Jan 24 00:31:21.076335 sshd[2230]: Accepted publickey for core from 4.153.228.146 port 56546 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:31:21.077761 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:31:21.082289 systemd-logind[1960]: New session 2 of user core.
Jan 24 00:31:21.086649 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 00:31:21.452712 sshd[2230]: pam_unix(sshd:session): session closed for user core
Jan 24 00:31:21.456912 systemd[1]: sshd@1-172.31.16.136:22-4.153.228.146:56546.service: Deactivated successfully.
Jan 24 00:31:21.459251 systemd[1]: session-2.scope: Deactivated successfully.
Jan 24 00:31:21.460896 systemd-logind[1960]: Session 2 logged out. Waiting for processes to exit.
Jan 24 00:31:21.462014 systemd-logind[1960]: Removed session 2.
Jan 24 00:31:21.547850 systemd[1]: Started sshd@2-172.31.16.136:22-4.153.228.146:56558.service - OpenSSH per-connection server daemon (4.153.228.146:56558).
Jan 24 00:31:22.073511 sshd[2242]: Accepted publickey for core from 4.153.228.146 port 56558 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:31:22.074383 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:31:22.079136 systemd-logind[1960]: New session 3 of user core.
Jan 24 00:31:22.084725 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 24 00:31:22.448768 sshd[2242]: pam_unix(sshd:session): session closed for user core
Jan 24 00:31:22.451484 systemd[1]: sshd@2-172.31.16.136:22-4.153.228.146:56558.service: Deactivated successfully.
Jan 24 00:31:22.453117 systemd[1]: session-3.scope: Deactivated successfully.
Jan 24 00:31:22.454446 systemd-logind[1960]: Session 3 logged out. Waiting for processes to exit.
Jan 24 00:31:22.456033 systemd-logind[1960]: Removed session 3.
Jan 24 00:31:22.542522 systemd[1]: Started sshd@3-172.31.16.136:22-4.153.228.146:56562.service - OpenSSH per-connection server daemon (4.153.228.146:56562).
Jan 24 00:31:22.637773 kubelet[2224]: E0124 00:31:22.637698 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:31:22.639989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:31:22.640120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:31:22.640359 systemd[1]: kubelet.service: Consumed 1.061s CPU time.
Jan 24 00:31:23.071689 sshd[2249]: Accepted publickey for core from 4.153.228.146 port 56562 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:31:23.073056 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:31:23.077559 systemd-logind[1960]: New session 4 of user core.
Jan 24 00:31:23.084733 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 24 00:31:23.453615 sshd[2249]: pam_unix(sshd:session): session closed for user core
Jan 24 00:31:23.456265 systemd[1]: sshd@3-172.31.16.136:22-4.153.228.146:56562.service: Deactivated successfully.
Jan 24 00:31:23.458140 systemd[1]: session-4.scope: Deactivated successfully.
Jan 24 00:31:23.459384 systemd-logind[1960]: Session 4 logged out. Waiting for processes to exit.
Jan 24 00:31:23.460671 systemd-logind[1960]: Removed session 4.
Jan 24 00:31:24.046971 systemd-resolved[1918]: Clock change detected. Flushing caches.
Jan 24 00:31:24.093744 systemd[1]: Started sshd@4-172.31.16.136:22-4.153.228.146:56570.service - OpenSSH per-connection server daemon (4.153.228.146:56570).
Jan 24 00:31:24.585878 sshd[2257]: Accepted publickey for core from 4.153.228.146 port 56570 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:31:24.587289 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:31:24.593330 systemd-logind[1960]: New session 5 of user core.
Jan 24 00:31:24.598333 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 24 00:31:24.898799 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 24 00:31:24.899119 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:31:24.915093 sudo[2260]: pam_unix(sudo:session): session closed for user root
Jan 24 00:31:24.993726 sshd[2257]: pam_unix(sshd:session): session closed for user core
Jan 24 00:31:24.998059 systemd[1]: sshd@4-172.31.16.136:22-4.153.228.146:56570.service: Deactivated successfully.
Jan 24 00:31:24.999915 systemd[1]: session-5.scope: Deactivated successfully.
Jan 24 00:31:25.000679 systemd-logind[1960]: Session 5 logged out. Waiting for processes to exit.
Jan 24 00:31:25.002390 systemd-logind[1960]: Removed session 5.
Jan 24 00:31:25.079537 systemd[1]: Started sshd@5-172.31.16.136:22-4.153.228.146:46566.service - OpenSSH per-connection server daemon (4.153.228.146:46566).
Jan 24 00:31:25.565550 sshd[2265]: Accepted publickey for core from 4.153.228.146 port 46566 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:31:25.567365 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:31:25.571951 systemd-logind[1960]: New session 6 of user core.
Jan 24 00:31:25.582414 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 24 00:31:25.841098 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 24 00:31:25.841445 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:31:25.845808 sudo[2269]: pam_unix(sudo:session): session closed for user root
Jan 24 00:31:25.852205 sudo[2268]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 24 00:31:25.852530 sudo[2268]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:31:25.865598 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 24 00:31:25.870775 auditctl[2272]: No rules
Jan 24 00:31:25.871212 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 24 00:31:25.871434 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 24 00:31:25.874468 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:31:25.914130 augenrules[2290]: No rules
Jan 24 00:31:25.915664 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:31:25.918413 sudo[2268]: pam_unix(sudo:session): session closed for user root
Jan 24 00:31:25.995401 sshd[2265]: pam_unix(sshd:session): session closed for user core
Jan 24 00:31:25.998326 systemd[1]: sshd@5-172.31.16.136:22-4.153.228.146:46566.service: Deactivated successfully.
Jan 24 00:31:26.000027 systemd[1]: session-6.scope: Deactivated successfully.
Jan 24 00:31:26.002114 systemd-logind[1960]: Session 6 logged out. Waiting for processes to exit.
Jan 24 00:31:26.003501 systemd-logind[1960]: Removed session 6.
Jan 24 00:31:26.084500 systemd[1]: Started sshd@6-172.31.16.136:22-4.153.228.146:46582.service - OpenSSH per-connection server daemon (4.153.228.146:46582).
Jan 24 00:31:26.566746 sshd[2298]: Accepted publickey for core from 4.153.228.146 port 46582 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY
Jan 24 00:31:26.568170 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:31:26.572414 systemd-logind[1960]: New session 7 of user core.
Jan 24 00:31:26.578415 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 24 00:31:26.842734 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 24 00:31:26.843028 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 24 00:31:27.682515 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 24 00:31:27.694708 (dockerd)[2316]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 24 00:31:28.306945 dockerd[2316]: time="2026-01-24T00:31:28.306875061Z" level=info msg="Starting up"
Jan 24 00:31:28.462187 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1257638313-merged.mount: Deactivated successfully.
Jan 24 00:31:28.520610 dockerd[2316]: time="2026-01-24T00:31:28.520558449Z" level=info msg="Loading containers: start."
Jan 24 00:31:28.676198 kernel: Initializing XFRM netlink socket
Jan 24 00:31:28.733824 (udev-worker)[2345]: Network interface NamePolicy= disabled on kernel command line.
Jan 24 00:31:28.789590 systemd-networkd[1622]: docker0: Link UP
Jan 24 00:31:28.811103 dockerd[2316]: time="2026-01-24T00:31:28.811052825Z" level=info msg="Loading containers: done."
Jan 24 00:31:28.844319 dockerd[2316]: time="2026-01-24T00:31:28.844258197Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 24 00:31:28.844478 dockerd[2316]: time="2026-01-24T00:31:28.844379624Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 24 00:31:28.844515 dockerd[2316]: time="2026-01-24T00:31:28.844489095Z" level=info msg="Daemon has completed initialization"
Jan 24 00:31:28.877000 dockerd[2316]: time="2026-01-24T00:31:28.876918589Z" level=info msg="API listen on /run/docker.sock"
Jan 24 00:31:28.877341 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 24 00:31:29.458732 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1473406061-merged.mount: Deactivated successfully.
Jan 24 00:31:31.104420 containerd[1990]: time="2026-01-24T00:31:31.104375007Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 24 00:31:32.127387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108104442.mount: Deactivated successfully.
Jan 24 00:31:33.411786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:31:33.420498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:31:33.921970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:31:33.926405 (kubelet)[2523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:31:34.005979 kubelet[2523]: E0124 00:31:34.005925 2523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:31:34.012609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:31:34.012801 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:31:34.318231 containerd[1990]: time="2026-01-24T00:31:34.318075737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:34.319390 containerd[1990]: time="2026-01-24T00:31:34.319348455Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712"
Jan 24 00:31:34.321775 containerd[1990]: time="2026-01-24T00:31:34.320479786Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:34.323896 containerd[1990]: time="2026-01-24T00:31:34.323859273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:34.324839 containerd[1990]: time="2026-01-24T00:31:34.324801975Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.220390472s"
Jan 24 00:31:34.324935 containerd[1990]: time="2026-01-24T00:31:34.324842434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 24 00:31:34.325749 containerd[1990]: time="2026-01-24T00:31:34.325712674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 24 00:31:36.294397 containerd[1990]: time="2026-01-24T00:31:36.294336277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:36.295883 containerd[1990]: time="2026-01-24T00:31:36.295599716Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781"
Jan 24 00:31:36.299168 containerd[1990]: time="2026-01-24T00:31:36.297568590Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:36.305228 containerd[1990]: time="2026-01-24T00:31:36.305173203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:36.306642 containerd[1990]: time="2026-01-24T00:31:36.306596393Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.980762899s"
Jan 24 00:31:36.306809 containerd[1990]: time="2026-01-24T00:31:36.306784708Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 24 00:31:36.307834 containerd[1990]: time="2026-01-24T00:31:36.307796862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 24 00:31:38.058214 containerd[1990]: time="2026-01-24T00:31:38.058169869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:38.059495 containerd[1990]: time="2026-01-24T00:31:38.059451848Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102"
Jan 24 00:31:38.060724 containerd[1990]: time="2026-01-24T00:31:38.060356787Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:38.063212 containerd[1990]: time="2026-01-24T00:31:38.063180148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:38.064416 containerd[1990]: time="2026-01-24T00:31:38.064379734Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.756548484s"
Jan 24 00:31:38.064501 containerd[1990]: time="2026-01-24T00:31:38.064422725Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 24 00:31:38.065165 containerd[1990]: time="2026-01-24T00:31:38.065091781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 24 00:31:39.372548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663213841.mount: Deactivated successfully.
Jan 24 00:31:39.978741 containerd[1990]: time="2026-01-24T00:31:39.978633579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:39.979822 containerd[1990]: time="2026-01-24T00:31:39.979677525Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096"
Jan 24 00:31:39.981122 containerd[1990]: time="2026-01-24T00:31:39.980988471Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:39.983311 containerd[1990]: time="2026-01-24T00:31:39.983280558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:39.983756 containerd[1990]: time="2026-01-24T00:31:39.983728237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.91860233s"
Jan 24 00:31:39.983808 containerd[1990]: time="2026-01-24T00:31:39.983762262Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 24 00:31:39.984632 containerd[1990]: time="2026-01-24T00:31:39.984590673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 24 00:31:40.446468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount416471248.mount: Deactivated successfully.
Jan 24 00:31:41.666193 containerd[1990]: time="2026-01-24T00:31:41.666122147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:41.667307 containerd[1990]: time="2026-01-24T00:31:41.667161897Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Jan 24 00:31:41.668480 containerd[1990]: time="2026-01-24T00:31:41.668425551Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:41.672707 containerd[1990]: time="2026-01-24T00:31:41.671336017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:31:41.672707 containerd[1990]: time="2026-01-24T00:31:41.672333637Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.68770919s"
Jan 24 00:31:41.672707 containerd[1990]: time="2026-01-24T00:31:41.672364291Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 24 00:31:41.673310 containerd[1990]: time="2026-01-24T00:31:41.673280305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 24 00:31:42.134303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726938556.mount: Deactivated successfully.
Jan 24 00:31:42.140352 containerd[1990]: time="2026-01-24T00:31:42.140297301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:31:42.141160 containerd[1990]: time="2026-01-24T00:31:42.141112389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 24 00:31:42.143442 containerd[1990]: time="2026-01-24T00:31:42.142280512Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:31:42.144822 containerd[1990]: time="2026-01-24T00:31:42.144652319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:31:42.145799 containerd[1990]: time="2026-01-24T00:31:42.145761815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.450969ms" Jan 24 00:31:42.145799 containerd[1990]: time="2026-01-24T00:31:42.145797402Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:31:42.146850 containerd[1990]: time="2026-01-24T00:31:42.146678472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 24 00:31:42.641418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054401526.mount: Deactivated successfully. Jan 24 00:31:44.161850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:31:44.171064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:31:44.510299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:31:44.516788 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:31:44.634128 kubelet[2662]: E0124 00:31:44.633979 2662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:31:44.638999 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:31:44.639681 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:31:45.647686 containerd[1990]: time="2026-01-24T00:31:45.647619393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:31:45.649727 containerd[1990]: time="2026-01-24T00:31:45.649646764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 24 00:31:45.651180 containerd[1990]: time="2026-01-24T00:31:45.651128680Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:31:45.655313 containerd[1990]: time="2026-01-24T00:31:45.654338641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:31:45.656519 containerd[1990]: time="2026-01-24T00:31:45.656482118Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.509771159s" Jan 24 00:31:45.656672 containerd[1990]: time="2026-01-24T00:31:45.656640227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 24 00:31:47.424693 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 24 00:31:50.091704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:31:50.098508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:31:50.133244 systemd[1]: Reloading requested from client PID 2704 ('systemctl') (unit session-7.scope)... Jan 24 00:31:50.133263 systemd[1]: Reloading... Jan 24 00:31:50.268184 zram_generator::config[2742]: No configuration found. Jan 24 00:31:50.425378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:31:50.510669 systemd[1]: Reloading finished in 376 ms. Jan 24 00:31:50.564640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:31:50.564744 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:31:50.565037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:31:50.571547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:31:50.766279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:31:50.775525 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:31:50.828171 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:31:50.828171 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 24 00:31:50.828171 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:31:50.832890 kubelet[2806]: I0124 00:31:50.832266 2806 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:31:51.259586 kubelet[2806]: I0124 00:31:51.259545 2806 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:31:51.259586 kubelet[2806]: I0124 00:31:51.259574 2806 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:31:51.259841 kubelet[2806]: I0124 00:31:51.259826 2806 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:31:51.302077 kubelet[2806]: I0124 00:31:51.302023 2806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:31:51.307964 kubelet[2806]: E0124 00:31:51.307657 2806 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:31:51.330119 kubelet[2806]: E0124 00:31:51.330060 2806 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:31:51.330119 kubelet[2806]: I0124 00:31:51.330098 2806 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:31:51.336771 kubelet[2806]: I0124 00:31:51.336740 2806 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:31:51.338767 kubelet[2806]: I0124 00:31:51.338689 2806 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:31:51.342526 kubelet[2806]: I0124 00:31:51.338746 2806 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:31:51.343819 kubelet[2806]: I0124 00:31:51.343793 2806 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:31:51.343819 kubelet[2806]: I0124 00:31:51.343823 2806 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:31:51.345038 kubelet[2806]: I0124 00:31:51.345010 2806 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:31:51.351272 kubelet[2806]: I0124 00:31:51.351050 2806 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:31:51.351272 kubelet[2806]: I0124 00:31:51.351090 2806 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:31:51.351994 kubelet[2806]: I0124 00:31:51.351968 2806 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:31:51.351994 kubelet[2806]: I0124 00:31:51.351994 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:31:51.354239 kubelet[2806]: E0124 00:31:51.353817 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-136&limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:31:51.365715 kubelet[2806]: E0124 00:31:51.365420 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 24 00:31:51.366024 kubelet[2806]: I0124 00:31:51.365886 2806 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:31:51.366552 kubelet[2806]: I0124 00:31:51.366527 2806 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:31:51.367585 kubelet[2806]: W0124 00:31:51.367557 2806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:31:51.372458 kubelet[2806]: I0124 00:31:51.372416 2806 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:31:51.372566 kubelet[2806]: I0124 00:31:51.372479 2806 server.go:1289] "Started kubelet" Jan 24 00:31:51.375036 kubelet[2806]: I0124 00:31:51.374862 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:31:51.375036 kubelet[2806]: I0124 00:31:51.374950 2806 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:31:51.375250 kubelet[2806]: I0124 00:31:51.375224 2806 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:31:51.385458 kubelet[2806]: I0124 00:31:51.385214 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:31:51.390134 kubelet[2806]: E0124 00:31:51.383815 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.136:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.136:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-136.188d836802bff65b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-136,UID:ip-172-31-16-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-136,},FirstTimestamp:2026-01-24 00:31:51.372445275 +0000 UTC m=+0.592890362,LastTimestamp:2026-01-24 00:31:51.372445275 +0000 UTC m=+0.592890362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-136,}" Jan 24 00:31:51.392355 kubelet[2806]: I0124 00:31:51.392335 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:31:51.395163 kubelet[2806]: I0124 00:31:51.394680 2806 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:31:51.395591 kubelet[2806]: I0124 00:31:51.393928 2806 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:31:51.395670 kubelet[2806]: E0124 00:31:51.394087 2806 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-136\" not found" Jan 24 00:31:51.395714 kubelet[2806]: I0124 00:31:51.393917 2806 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:31:51.396045 kubelet[2806]: I0124 00:31:51.396035 2806 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:31:51.396254 kubelet[2806]: E0124 00:31:51.396235 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-136?timeout=10s\": dial tcp 
172.31.16.136:6443: connect: connection refused" interval="200ms" Jan 24 00:31:51.396996 kubelet[2806]: I0124 00:31:51.396976 2806 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:31:51.399374 kubelet[2806]: E0124 00:31:51.399330 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:31:51.399565 kubelet[2806]: I0124 00:31:51.399554 2806 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:31:51.399628 kubelet[2806]: I0124 00:31:51.399622 2806 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:31:51.412371 kubelet[2806]: I0124 00:31:51.412335 2806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:31:51.414286 kubelet[2806]: I0124 00:31:51.414255 2806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:31:51.414286 kubelet[2806]: I0124 00:31:51.414279 2806 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:31:51.414406 kubelet[2806]: I0124 00:31:51.414307 2806 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:31:51.414406 kubelet[2806]: I0124 00:31:51.414315 2806 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:31:51.414406 kubelet[2806]: E0124 00:31:51.414351 2806 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:31:51.421533 kubelet[2806]: E0124 00:31:51.421493 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:31:51.430246 kubelet[2806]: I0124 00:31:51.430218 2806 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:31:51.430246 kubelet[2806]: I0124 00:31:51.430236 2806 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:31:51.430246 kubelet[2806]: I0124 00:31:51.430253 2806 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:31:51.435729 kubelet[2806]: I0124 00:31:51.435681 2806 policy_none.go:49] "None policy: Start" Jan 24 00:31:51.435729 kubelet[2806]: I0124 00:31:51.435721 2806 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:31:51.435729 kubelet[2806]: I0124 00:31:51.435737 2806 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:31:51.446696 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:31:51.458308 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:31:51.461849 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 24 00:31:51.473100 kubelet[2806]: E0124 00:31:51.472066 2806 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:31:51.473100 kubelet[2806]: I0124 00:31:51.472261 2806 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:31:51.473100 kubelet[2806]: I0124 00:31:51.472311 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:31:51.473100 kubelet[2806]: I0124 00:31:51.472583 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:31:51.474760 kubelet[2806]: E0124 00:31:51.474678 2806 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:31:51.474760 kubelet[2806]: E0124 00:31:51.474712 2806 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-136\" not found" Jan 24 00:31:51.529204 systemd[1]: Created slice kubepods-burstable-podd920a6adf7a8907c9ef6d39719840183.slice - libcontainer container kubepods-burstable-podd920a6adf7a8907c9ef6d39719840183.slice. Jan 24 00:31:51.541308 kubelet[2806]: E0124 00:31:51.541061 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:51.546855 systemd[1]: Created slice kubepods-burstable-pod5b7bb6df8d7ab5a43ae298959cd03254.slice - libcontainer container kubepods-burstable-pod5b7bb6df8d7ab5a43ae298959cd03254.slice. Jan 24 00:31:51.554864 kubelet[2806]: E0124 00:31:51.554750 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:51.556840 systemd[1]: Created slice kubepods-burstable-podf69e30ca159ca01833a943c41c58e0ac.slice - libcontainer container kubepods-burstable-podf69e30ca159ca01833a943c41c58e0ac.slice. 
Jan 24 00:31:51.559067 kubelet[2806]: E0124 00:31:51.559040 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:51.574552 kubelet[2806]: I0124 00:31:51.574509 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-136" Jan 24 00:31:51.574848 kubelet[2806]: E0124 00:31:51.574828 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.136:6443/api/v1/nodes\": dial tcp 172.31.16.136:6443: connect: connection refused" node="ip-172-31-16-136" Jan 24 00:31:51.597308 kubelet[2806]: I0124 00:31:51.597273 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:51.597308 kubelet[2806]: I0124 00:31:51.597316 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d920a6adf7a8907c9ef6d39719840183-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-136\" (UID: \"d920a6adf7a8907c9ef6d39719840183\") " pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:51.597595 kubelet[2806]: I0124 00:31:51.597335 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d920a6adf7a8907c9ef6d39719840183-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-136\" (UID: \"d920a6adf7a8907c9ef6d39719840183\") " pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:51.597595 kubelet[2806]: I0124 00:31:51.597436 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:51.597595 kubelet[2806]: I0124 00:31:51.597458 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:51.597595 kubelet[2806]: I0124 00:31:51.597482 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f69e30ca159ca01833a943c41c58e0ac-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-136\" (UID: \"f69e30ca159ca01833a943c41c58e0ac\") " pod="kube-system/kube-scheduler-ip-172-31-16-136" Jan 24 00:31:51.597595 kubelet[2806]: I0124 00:31:51.597500 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d920a6adf7a8907c9ef6d39719840183-ca-certs\") pod \"kube-apiserver-ip-172-31-16-136\" (UID: \"d920a6adf7a8907c9ef6d39719840183\") " pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:51.597716 
kubelet[2806]: I0124 00:31:51.597515 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:51.597716 kubelet[2806]: I0124 00:31:51.597529 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:51.597871 kubelet[2806]: E0124 00:31:51.597837 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-136?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" interval="400ms" Jan 24 00:31:51.776909 kubelet[2806]: I0124 00:31:51.776877 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-136" Jan 24 00:31:51.777531 kubelet[2806]: E0124 00:31:51.777494 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.136:6443/api/v1/nodes\": dial tcp 172.31.16.136:6443: connect: connection refused" node="ip-172-31-16-136" Jan 24 00:31:51.842552 containerd[1990]: time="2026-01-24T00:31:51.842425826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-136,Uid:d920a6adf7a8907c9ef6d39719840183,Namespace:kube-system,Attempt:0,}" Jan 24 00:31:51.864063 containerd[1990]: time="2026-01-24T00:31:51.863549088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-136,Uid:f69e30ca159ca01833a943c41c58e0ac,Namespace:kube-system,Attempt:0,}" Jan 24 00:31:51.864063 containerd[1990]: time="2026-01-24T00:31:51.863553347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-136,Uid:5b7bb6df8d7ab5a43ae298959cd03254,Namespace:kube-system,Attempt:0,}" Jan 24 00:31:51.998630 kubelet[2806]: E0124 00:31:51.998579 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-136?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" interval="800ms" Jan 24 00:31:52.179853 kubelet[2806]: I0124 00:31:52.179808 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-136" Jan 24 00:31:52.180151 kubelet[2806]: E0124 00:31:52.180086 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.136:6443/api/v1/nodes\": dial tcp 172.31.16.136:6443: connect: connection refused" node="ip-172-31-16-136" Jan 24 00:31:52.351032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116314318.mount: Deactivated successfully. 
Jan 24 00:31:52.364454 kubelet[2806]: E0124 00:31:52.364415 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:31:52.370177 containerd[1990]: time="2026-01-24T00:31:52.369993825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:31:52.372167 containerd[1990]: time="2026-01-24T00:31:52.372098010Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:31:52.374128 containerd[1990]: time="2026-01-24T00:31:52.373873754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 24 00:31:52.376160 containerd[1990]: time="2026-01-24T00:31:52.376047093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:31:52.378304 containerd[1990]: time="2026-01-24T00:31:52.378260219Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:31:52.380598 containerd[1990]: time="2026-01-24T00:31:52.380558063Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:31:52.382319 containerd[1990]: time="2026-01-24T00:31:52.382265659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:31:52.388082 containerd[1990]: time="2026-01-24T00:31:52.388022932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:31:52.389038 containerd[1990]: time="2026-01-24T00:31:52.388812183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.304067ms" Jan 24 00:31:52.391269 containerd[1990]: time="2026-01-24T00:31:52.391125232Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.22316ms" Jan 24 00:31:52.393169 containerd[1990]: time="2026-01-24T00:31:52.393099133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.456108ms" Jan 24 00:31:52.584559 kubelet[2806]: E0124 00:31:52.584413 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:31:52.656303 containerd[1990]: time="2026-01-24T00:31:52.655963558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:31:52.656303 containerd[1990]: time="2026-01-24T00:31:52.656056476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:31:52.656303 containerd[1990]: time="2026-01-24T00:31:52.656079323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:31:52.656303 containerd[1990]: time="2026-01-24T00:31:52.656237778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:31:52.658764 containerd[1990]: time="2026-01-24T00:31:52.658471115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:31:52.658764 containerd[1990]: time="2026-01-24T00:31:52.658540353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:31:52.658764 containerd[1990]: time="2026-01-24T00:31:52.658563853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:31:52.659570 containerd[1990]: time="2026-01-24T00:31:52.659318813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:31:52.668546 containerd[1990]: time="2026-01-24T00:31:52.668443418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:31:52.668546 containerd[1990]: time="2026-01-24T00:31:52.668513235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:31:52.669174 containerd[1990]: time="2026-01-24T00:31:52.668767286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:31:52.669174 containerd[1990]: time="2026-01-24T00:31:52.668891320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:31:52.700752 kubelet[2806]: E0124 00:31:52.700590 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-136&limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:31:52.705437 systemd[1]: Started cri-containerd-3013c12e8553367f3c326b4bec3c0bb2a43c0fcc4e50f51f37ab32f5a4aba68c.scope - libcontainer container 3013c12e8553367f3c326b4bec3c0bb2a43c0fcc4e50f51f37ab32f5a4aba68c. Jan 24 00:31:52.708386 systemd[1]: Started cri-containerd-b52fe91fd6fc3e1b49b4a161be02a41a8888794f620a4f9e051aa2d8e0c4e5dc.scope - libcontainer container b52fe91fd6fc3e1b49b4a161be02a41a8888794f620a4f9e051aa2d8e0c4e5dc. Jan 24 00:31:52.711237 systemd[1]: Started cri-containerd-c436aca7a03f2a10c1401ae69755925302829fa093429061b40aa6cf0968bce7.scope - libcontainer container c436aca7a03f2a10c1401ae69755925302829fa093429061b40aa6cf0968bce7. Jan 24 00:31:52.791673 containerd[1990]: time="2026-01-24T00:31:52.791628372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-136,Uid:d920a6adf7a8907c9ef6d39719840183,Namespace:kube-system,Attempt:0,} returns sandbox id \"3013c12e8553367f3c326b4bec3c0bb2a43c0fcc4e50f51f37ab32f5a4aba68c\"" Jan 24 00:31:52.799255 kubelet[2806]: E0124 00:31:52.799209 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-136?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" interval="1.6s" Jan 24 00:31:52.800750 containerd[1990]: time="2026-01-24T00:31:52.800709731Z" level=info msg="CreateContainer within sandbox \"3013c12e8553367f3c326b4bec3c0bb2a43c0fcc4e50f51f37ab32f5a4aba68c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:31:52.828162 containerd[1990]: time="2026-01-24T00:31:52.827791423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-136,Uid:5b7bb6df8d7ab5a43ae298959cd03254,Namespace:kube-system,Attempt:0,} returns sandbox id \"b52fe91fd6fc3e1b49b4a161be02a41a8888794f620a4f9e051aa2d8e0c4e5dc\"" Jan 24 00:31:52.840119 containerd[1990]: time="2026-01-24T00:31:52.840007050Z" level=info msg="CreateContainer within sandbox \"b52fe91fd6fc3e1b49b4a161be02a41a8888794f620a4f9e051aa2d8e0c4e5dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:31:52.847559 containerd[1990]: time="2026-01-24T00:31:52.847433197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-136,Uid:f69e30ca159ca01833a943c41c58e0ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c436aca7a03f2a10c1401ae69755925302829fa093429061b40aa6cf0968bce7\"" Jan 24 00:31:52.855861 containerd[1990]: time="2026-01-24T00:31:52.855803124Z" level=info msg="CreateContainer within sandbox \"c436aca7a03f2a10c1401ae69755925302829fa093429061b40aa6cf0968bce7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:31:52.868341 kubelet[2806]: E0124 00:31:52.868273 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection 
refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:31:52.872042 containerd[1990]: time="2026-01-24T00:31:52.871987567Z" level=info msg="CreateContainer within sandbox \"3013c12e8553367f3c326b4bec3c0bb2a43c0fcc4e50f51f37ab32f5a4aba68c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"68ccb05f841a4f39e77a24cf7e1004d70a3e92881c7939b6df4c583ee6e0bb21\"" Jan 24 00:31:52.872692 containerd[1990]: time="2026-01-24T00:31:52.872662662Z" level=info msg="StartContainer for \"68ccb05f841a4f39e77a24cf7e1004d70a3e92881c7939b6df4c583ee6e0bb21\"" Jan 24 00:31:52.882787 containerd[1990]: time="2026-01-24T00:31:52.882623728Z" level=info msg="CreateContainer within sandbox \"b52fe91fd6fc3e1b49b4a161be02a41a8888794f620a4f9e051aa2d8e0c4e5dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af\"" Jan 24 00:31:52.883354 containerd[1990]: time="2026-01-24T00:31:52.883320825Z" level=info msg="StartContainer for \"f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af\"" Jan 24 00:31:52.901752 containerd[1990]: time="2026-01-24T00:31:52.901701637Z" level=info msg="CreateContainer within sandbox \"c436aca7a03f2a10c1401ae69755925302829fa093429061b40aa6cf0968bce7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04\"" Jan 24 00:31:52.902299 containerd[1990]: time="2026-01-24T00:31:52.902264372Z" level=info msg="StartContainer for \"c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04\"" Jan 24 00:31:52.919398 systemd[1]: Started cri-containerd-68ccb05f841a4f39e77a24cf7e1004d70a3e92881c7939b6df4c583ee6e0bb21.scope - libcontainer container 68ccb05f841a4f39e77a24cf7e1004d70a3e92881c7939b6df4c583ee6e0bb21. Jan 24 00:31:52.949863 systemd[1]: Started cri-containerd-f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af.scope - libcontainer container f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af. Jan 24 00:31:52.963378 systemd[1]: Started cri-containerd-c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04.scope - libcontainer container c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04. 
Jan 24 00:31:52.985449 kubelet[2806]: I0124 00:31:52.985416 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-136" Jan 24 00:31:52.985846 kubelet[2806]: E0124 00:31:52.985808 2806 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.136:6443/api/v1/nodes\": dial tcp 172.31.16.136:6443: connect: connection refused" node="ip-172-31-16-136" Jan 24 00:31:53.024939 containerd[1990]: time="2026-01-24T00:31:53.024685039Z" level=info msg="StartContainer for \"68ccb05f841a4f39e77a24cf7e1004d70a3e92881c7939b6df4c583ee6e0bb21\" returns successfully" Jan 24 00:31:53.041756 containerd[1990]: time="2026-01-24T00:31:53.041226141Z" level=info msg="StartContainer for \"f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af\" returns successfully" Jan 24 00:31:53.071305 containerd[1990]: time="2026-01-24T00:31:53.071056068Z" level=info msg="StartContainer for \"c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04\" returns successfully" Jan 24 00:31:53.429170 kubelet[2806]: E0124 00:31:53.428750 2806 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:31:53.440746 kubelet[2806]: E0124 00:31:53.438941 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:53.441988 kubelet[2806]: E0124 00:31:53.441491 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:53.446376 kubelet[2806]: E0124 00:31:53.446353 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:54.053663 kubelet[2806]: E0124 00:31:54.053617 2806 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.136:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:31:54.400369 kubelet[2806]: E0124 00:31:54.400242 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-136?timeout=10s\": dial tcp 172.31.16.136:6443: connect: connection refused" interval="3.2s" Jan 24 00:31:54.450033 kubelet[2806]: E0124 00:31:54.449830 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:54.450774 kubelet[2806]: E0124 00:31:54.450605 2806 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:54.588269 kubelet[2806]: I0124 00:31:54.587906 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-136" Jan 24 00:31:54.588572 kubelet[2806]: E0124 00:31:54.588548 2806 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.136:6443/api/v1/nodes\": dial tcp 172.31.16.136:6443: connect: connection refused" node="ip-172-31-16-136" Jan 24 00:31:57.255802 kubelet[2806]: E0124 00:31:57.255760 2806 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-16-136" not found Jan 24 00:31:57.358833 kubelet[2806]: I0124 00:31:57.358780 2806 apiserver.go:52] "Watching apiserver" Jan 24 00:31:57.396417 kubelet[2806]: I0124 00:31:57.396377 2806 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:31:57.599445 kubelet[2806]: E0124 00:31:57.599340 2806 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-16-136" not found Jan 24 00:31:57.603960 kubelet[2806]: E0124 00:31:57.603891 2806 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-136\" not found" node="ip-172-31-16-136" Jan 24 00:31:57.791017 kubelet[2806]: I0124 00:31:57.790981 2806 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-136" Jan 24 00:31:57.797694 kubelet[2806]: I0124 00:31:57.797513 2806 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-136" Jan 24 00:31:57.797694 kubelet[2806]: E0124 00:31:57.797548 2806 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-136\": node \"ip-172-31-16-136\" not found" Jan 24 00:31:57.896508 kubelet[2806]: I0124 00:31:57.896363 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-136" Jan 24 00:31:57.907866 kubelet[2806]: I0124 00:31:57.907752 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:57.914433 kubelet[2806]: I0124 00:31:57.914396 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:58.759180 systemd[1]: Reloading requested from client PID 3094 ('systemctl') (unit session-7.scope)... Jan 24 00:31:58.759199 systemd[1]: Reloading... Jan 24 00:31:58.873178 zram_generator::config[3131]: No configuration found. Jan 24 00:31:59.006952 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:31:59.110648 systemd[1]: Reloading finished in 350 ms. Jan 24 00:31:59.160657 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:31:59.184523 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:31:59.184753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:31:59.189833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:31:59.425419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:31:59.431531 (kubelet)[3194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:31:59.493632 kubelet[3194]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:31:59.493632 kubelet[3194]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:31:59.493632 kubelet[3194]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:31:59.494115 kubelet[3194]: I0124 00:31:59.493735 3194 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:31:59.504189 kubelet[3194]: I0124 00:31:59.502893 3194 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:31:59.504189 kubelet[3194]: I0124 00:31:59.503026 3194 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:31:59.504189 kubelet[3194]: I0124 00:31:59.503274 3194 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:31:59.504566 kubelet[3194]: I0124 00:31:59.504527 3194 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:31:59.509499 kubelet[3194]: I0124 00:31:59.509455 3194 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:31:59.520644 kubelet[3194]: E0124 00:31:59.520571 3194 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:31:59.521015 kubelet[3194]: I0124 00:31:59.520789 3194 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:31:59.523852 kubelet[3194]: I0124 00:31:59.523822 3194 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:31:59.524757 kubelet[3194]: I0124 00:31:59.524265 3194 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:31:59.524757 kubelet[3194]: I0124 00:31:59.524298 3194 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:31:59.524757 kubelet[3194]: I0124 00:31:59.524648 3194 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:31:59.524757 kubelet[3194]: I0124 00:31:59.524663 3194 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:31:59.526917 kubelet[3194]: I0124 00:31:59.526893 3194 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:31:59.530203 kubelet[3194]: I0124 00:31:59.530184 3194 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:31:59.530381 kubelet[3194]: I0124 00:31:59.530367 3194 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:31:59.530481 kubelet[3194]: I0124 00:31:59.530472 3194 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:31:59.530567 kubelet[3194]: I0124 00:31:59.530557 3194 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:31:59.538121 kubelet[3194]: I0124 00:31:59.538088 3194 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:31:59.540074 kubelet[3194]: I0124 00:31:59.538788 3194 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:31:59.546972 kubelet[3194]: I0124 00:31:59.546927 3194 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:31:59.547231 kubelet[3194]: I0124 00:31:59.547214 3194 server.go:1289] "Started kubelet" Jan 24 00:31:59.548257 kubelet[3194]: I0124 00:31:59.548221 3194 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:31:59.549513 kubelet[3194]: I0124 
00:31:59.549493 3194 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:31:59.556176 kubelet[3194]: I0124 00:31:59.556099 3194 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:31:59.559548 kubelet[3194]: I0124 00:31:59.549456 3194 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:31:59.559672 kubelet[3194]: I0124 00:31:59.559604 3194 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:31:59.560731 kubelet[3194]: I0124 00:31:59.560706 3194 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:31:59.566668 kubelet[3194]: I0124 00:31:59.566644 3194 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:31:59.567292 kubelet[3194]: I0124 00:31:59.567274 3194 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:31:59.567554 kubelet[3194]: I0124 00:31:59.567542 3194 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:31:59.571092 kubelet[3194]: I0124 00:31:59.571062 3194 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:31:59.571224 kubelet[3194]: I0124 00:31:59.571194 3194 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:31:59.575044 kubelet[3194]: E0124 00:31:59.574904 3194 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:31:59.575535 kubelet[3194]: I0124 00:31:59.575212 3194 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:31:59.599280 kubelet[3194]: I0124 00:31:59.599209 3194 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:31:59.602322 kubelet[3194]: I0124 00:31:59.602292 3194 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:31:59.602322 kubelet[3194]: I0124 00:31:59.602321 3194 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:31:59.602480 kubelet[3194]: I0124 00:31:59.602345 3194 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:31:59.602480 kubelet[3194]: I0124 00:31:59.602353 3194 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:31:59.602480 kubelet[3194]: E0124 00:31:59.602400 3194 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:31:59.639861 kubelet[3194]: I0124 00:31:59.639837 3194 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:31:59.639861 kubelet[3194]: I0124 00:31:59.639856 3194 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:31:59.640066 kubelet[3194]: I0124 00:31:59.639878 3194 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:31:59.640066 kubelet[3194]: I0124 00:31:59.640038 3194 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:31:59.640066 kubelet[3194]: I0124 00:31:59.640051 3194 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:31:59.640243 kubelet[3194]: I0124 00:31:59.640072 3194 policy_none.go:49] "None policy: Start" Jan 24 00:31:59.640243 kubelet[3194]: I0124 00:31:59.640085 3194 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:31:59.640243 kubelet[3194]: I0124 00:31:59.640097 3194 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:31:59.641717 kubelet[3194]: I0124 00:31:59.641685 3194 state_mem.go:75] "Updated machine memory state" Jan 24 00:31:59.645969 kubelet[3194]: E0124 00:31:59.645940 3194 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:31:59.646153 kubelet[3194]: I0124 00:31:59.646124 3194 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:31:59.646227 kubelet[3194]: I0124 00:31:59.646171 3194 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:31:59.646843 kubelet[3194]: I0124 00:31:59.646800 3194 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:31:59.650318 kubelet[3194]: E0124 00:31:59.650013 3194 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:31:59.704254 kubelet[3194]: I0124 00:31:59.704069 3194 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-136" Jan 24 00:31:59.705468 kubelet[3194]: I0124 00:31:59.704711 3194 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:59.705468 kubelet[3194]: I0124 00:31:59.705057 3194 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:59.714490 kubelet[3194]: E0124 00:31:59.714398 3194 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-136\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-136" Jan 24 00:31:59.714490 kubelet[3194]: E0124 00:31:59.714398 3194 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-136\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:59.715634 kubelet[3194]: E0124 00:31:59.715576 3194 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-136\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:59.756305 kubelet[3194]: I0124 00:31:59.755264 3194 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-136" Jan 24 00:31:59.762632 kubelet[3194]: I0124 00:31:59.762359 3194 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-136" Jan 24 00:31:59.762632 kubelet[3194]: I0124 00:31:59.762428 3194 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-136" Jan 24 00:31:59.769273 kubelet[3194]: I0124 00:31:59.769240 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f69e30ca159ca01833a943c41c58e0ac-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-136\" (UID: \"f69e30ca159ca01833a943c41c58e0ac\") " pod="kube-system/kube-scheduler-ip-172-31-16-136" Jan 24 00:31:59.769273 kubelet[3194]: I0124 00:31:59.769269 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d920a6adf7a8907c9ef6d39719840183-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-136\" (UID: \"d920a6adf7a8907c9ef6d39719840183\") " pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:59.770040 kubelet[3194]: I0124 00:31:59.769284 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:59.770040 kubelet[3194]: I0124 00:31:59.769303 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:59.770040 kubelet[3194]: I0124 00:31:59.769329 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:59.770040 kubelet[3194]: I0124 00:31:59.769368 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d920a6adf7a8907c9ef6d39719840183-ca-certs\") pod \"kube-apiserver-ip-172-31-16-136\" (UID: \"d920a6adf7a8907c9ef6d39719840183\") " pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:59.770040 kubelet[3194]: I0124 00:31:59.769388 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d920a6adf7a8907c9ef6d39719840183-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-136\" (UID: \"d920a6adf7a8907c9ef6d39719840183\") " pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:31:59.770205 kubelet[3194]: I0124 00:31:59.769403 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:31:59.770205 kubelet[3194]: I0124 00:31:59.769420 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b7bb6df8d7ab5a43ae298959cd03254-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-136\" (UID: \"5b7bb6df8d7ab5a43ae298959cd03254\") " pod="kube-system/kube-controller-manager-ip-172-31-16-136" Jan 24 00:32:00.535292 kubelet[3194]: I0124 00:32:00.535248 3194 apiserver.go:52] "Watching apiserver" Jan 24 00:32:00.568379 kubelet[3194]: I0124 00:32:00.568326 3194 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:32:00.624232 kubelet[3194]: I0124 00:32:00.624021 3194 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-136" Jan 24 00:32:00.625933 kubelet[3194]: I0124 00:32:00.625915 3194 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:32:00.636485 kubelet[3194]: E0124 00:32:00.636449 3194 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-136\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-136" Jan 24 00:32:00.639168 kubelet[3194]: E0124 00:32:00.638549 3194 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-136\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-136" Jan 24 00:32:00.653870 kubelet[3194]: I0124 00:32:00.653203 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-136" podStartSLOduration=3.653187806 podStartE2EDuration="3.653187806s" podCreationTimestamp="2026-01-24 00:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:00.65243949 +0000 UTC m=+1.206395429" watchObservedRunningTime="2026-01-24 00:32:00.653187806 +0000 UTC m=+1.207143722" Jan 24 
Jan 24 00:32:00.672833 kubelet[3194]: I0124 00:32:00.672753 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-136" podStartSLOduration=3.672738856 podStartE2EDuration="3.672738856s" podCreationTimestamp="2026-01-24 00:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:00.661930764 +0000 UTC m=+1.215886676" watchObservedRunningTime="2026-01-24 00:32:00.672738856 +0000 UTC m=+1.226694752"
Jan 24 00:32:00.672998 kubelet[3194]: I0124 00:32:00.672864 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-136" podStartSLOduration=3.6728596319999998 podStartE2EDuration="3.672859632s" podCreationTimestamp="2026-01-24 00:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:00.672674224 +0000 UTC m=+1.226630139" watchObservedRunningTime="2026-01-24 00:32:00.672859632 +0000 UTC m=+1.226815544"
Jan 24 00:32:02.628225 update_engine[1961]: I20260124 00:32:02.626708 1961 update_attempter.cc:509] Updating boot flags...
Jan 24 00:32:02.743273 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3256)
Jan 24 00:32:02.943162 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3256)
Jan 24 00:32:05.611634 kubelet[3194]: I0124 00:32:05.611572 3194 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 24 00:32:05.613039 containerd[1990]: time="2026-01-24T00:32:05.612680342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 24 00:32:05.613710 kubelet[3194]: I0124 00:32:05.613017 3194 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 24 00:32:06.755518 systemd[1]: Created slice kubepods-besteffort-pod2e9168bf_07a3_42ae_a5eb_17fff9a7a87a.slice - libcontainer container kubepods-besteffort-pod2e9168bf_07a3_42ae_a5eb_17fff9a7a87a.slice.
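The kuberuntime_manager and kubelet_network entries above record the node's pod CIDR being handed to the container runtime over CRI. A sketch of that call using the cri-api Go bindings; the containerd socket path is an assumption, since the log does not print it:

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed socket path; the kubelet on this host talks to containerd.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Push the pod CIDR from the log entries above to the runtime, the
    	// same shape of update kuberuntime_manager describes.
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
    		RuntimeConfig: &runtimeapi.RuntimeConfig{
    			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }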
Jan 24 00:32:06.822751 kubelet[3194]: I0124 00:32:06.822093 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e9168bf-07a3-42ae-a5eb-17fff9a7a87a-lib-modules\") pod \"kube-proxy-6kncx\" (UID: \"2e9168bf-07a3-42ae-a5eb-17fff9a7a87a\") " pod="kube-system/kube-proxy-6kncx"
Jan 24 00:32:06.822751 kubelet[3194]: I0124 00:32:06.822186 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e9168bf-07a3-42ae-a5eb-17fff9a7a87a-xtables-lock\") pod \"kube-proxy-6kncx\" (UID: \"2e9168bf-07a3-42ae-a5eb-17fff9a7a87a\") " pod="kube-system/kube-proxy-6kncx"
Jan 24 00:32:06.822751 kubelet[3194]: I0124 00:32:06.822213 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5pv8\" (UniqueName: \"kubernetes.io/projected/2e9168bf-07a3-42ae-a5eb-17fff9a7a87a-kube-api-access-j5pv8\") pod \"kube-proxy-6kncx\" (UID: \"2e9168bf-07a3-42ae-a5eb-17fff9a7a87a\") " pod="kube-system/kube-proxy-6kncx"
Jan 24 00:32:06.822751 kubelet[3194]: I0124 00:32:06.822248 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e9168bf-07a3-42ae-a5eb-17fff9a7a87a-kube-proxy\") pod \"kube-proxy-6kncx\" (UID: \"2e9168bf-07a3-42ae-a5eb-17fff9a7a87a\") " pod="kube-system/kube-proxy-6kncx"
Jan 24 00:32:06.876915 systemd[1]: Created slice kubepods-besteffort-pod53392829_68bd_46df_a811_d0872fb8e1b9.slice - libcontainer container kubepods-besteffort-pod53392829_68bd_46df_a811_d0872fb8e1b9.slice.
Jan 24 00:32:06.923329 kubelet[3194]: I0124 00:32:06.923284 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pctm\" (UniqueName: \"kubernetes.io/projected/53392829-68bd-46df-a811-d0872fb8e1b9-kube-api-access-5pctm\") pod \"tigera-operator-7dcd859c48-jm6t2\" (UID: \"53392829-68bd-46df-a811-d0872fb8e1b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-jm6t2"
Jan 24 00:32:06.923689 kubelet[3194]: I0124 00:32:06.923555 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53392829-68bd-46df-a811-d0872fb8e1b9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jm6t2\" (UID: \"53392829-68bd-46df-a811-d0872fb8e1b9\") " pod="tigera-operator/tigera-operator-7dcd859c48-jm6t2"
Jan 24 00:32:07.068113 containerd[1990]: time="2026-01-24T00:32:07.067981868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kncx,Uid:2e9168bf-07a3-42ae-a5eb-17fff9a7a87a,Namespace:kube-system,Attempt:0,}"
Jan 24 00:32:07.104158 containerd[1990]: time="2026-01-24T00:32:07.103604077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:32:07.104158 containerd[1990]: time="2026-01-24T00:32:07.103697144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:32:07.104158 containerd[1990]: time="2026-01-24T00:32:07.103713132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:07.104158 containerd[1990]: time="2026-01-24T00:32:07.103911055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:07.136429 systemd[1]: Started cri-containerd-1d62e9c25b99e9e552a0c918b2401cc4543b0bd9efe1b7c7a3ac2871c086e6e7.scope - libcontainer container 1d62e9c25b99e9e552a0c918b2401cc4543b0bd9efe1b7c7a3ac2871c086e6e7.
Jan 24 00:32:07.162460 containerd[1990]: time="2026-01-24T00:32:07.162309471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kncx,Uid:2e9168bf-07a3-42ae-a5eb-17fff9a7a87a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d62e9c25b99e9e552a0c918b2401cc4543b0bd9efe1b7c7a3ac2871c086e6e7\""
Jan 24 00:32:07.171591 containerd[1990]: time="2026-01-24T00:32:07.171515406Z" level=info msg="CreateContainer within sandbox \"1d62e9c25b99e9e552a0c918b2401cc4543b0bd9efe1b7c7a3ac2871c086e6e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 24 00:32:07.182437 containerd[1990]: time="2026-01-24T00:32:07.182123838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jm6t2,Uid:53392829-68bd-46df-a811-d0872fb8e1b9,Namespace:tigera-operator,Attempt:0,}"
Jan 24 00:32:07.219394 containerd[1990]: time="2026-01-24T00:32:07.219274542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:32:07.219394 containerd[1990]: time="2026-01-24T00:32:07.219342254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:32:07.219394 containerd[1990]: time="2026-01-24T00:32:07.219359462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:07.220348 containerd[1990]: time="2026-01-24T00:32:07.220232301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:07.233537 containerd[1990]: time="2026-01-24T00:32:07.233501227Z" level=info msg="CreateContainer within sandbox \"1d62e9c25b99e9e552a0c918b2401cc4543b0bd9efe1b7c7a3ac2871c086e6e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"661fb33258915f9d0be749dc50c2982f108459293c8490d0ac8e8d1cf37115b4\""
Jan 24 00:32:07.236350 containerd[1990]: time="2026-01-24T00:32:07.235342284Z" level=info msg="StartContainer for \"661fb33258915f9d0be749dc50c2982f108459293c8490d0ac8e8d1cf37115b4\""
Jan 24 00:32:07.239359 systemd[1]: Started cri-containerd-0e4fbb42b28bbabcb90659f4dfa445aa3ed10dbb3cbd1fcba670080e43788f14.scope - libcontainer container 0e4fbb42b28bbabcb90659f4dfa445aa3ed10dbb3cbd1fcba670080e43788f14.
Jan 24 00:32:07.268125 systemd[1]: Started cri-containerd-661fb33258915f9d0be749dc50c2982f108459293c8490d0ac8e8d1cf37115b4.scope - libcontainer container 661fb33258915f9d0be749dc50c2982f108459293c8490d0ac8e8d1cf37115b4.
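The lifecycle above (RunPodSandbox, CreateContainer, StartContainer, with each new shim loading its ttrpc plugins) is driven by the kubelet over CRI. The same containerd instance can be exercised directly with its Go client; a minimal sketch, assuming the default socket path and the "k8s.io" namespace that CRI-managed resources live in:

    package main

    import (
    	"context"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/cio"
    	"github.com/containerd/containerd/namespaces"
    	"github.com/containerd/containerd/oci"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images and containers are kept in the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pull, create and start roughly mirror the PullImage / CreateContainer /
    	// StartContainer messages above (the CRI plugin adds sandbox plumbing).
    	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	container, err := client.NewContainer(ctx, "demo",
    		containerd.WithNewSnapshot("demo-snapshot", image),
    		containerd.WithNewSpec(oci.WithImageConfig(image)),
    	)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer task.Delete(ctx)
    	if err := task.Start(ctx); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("task started with pid", task.Pid())
    }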
Jan 24 00:32:07.311574 containerd[1990]: time="2026-01-24T00:32:07.311503727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jm6t2,Uid:53392829-68bd-46df-a811-d0872fb8e1b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e4fbb42b28bbabcb90659f4dfa445aa3ed10dbb3cbd1fcba670080e43788f14\""
Jan 24 00:32:07.320602 containerd[1990]: time="2026-01-24T00:32:07.320480890Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 24 00:32:07.333056 containerd[1990]: time="2026-01-24T00:32:07.332950922Z" level=info msg="StartContainer for \"661fb33258915f9d0be749dc50c2982f108459293c8490d0ac8e8d1cf37115b4\" returns successfully"
Jan 24 00:32:08.547637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774049979.mount: Deactivated successfully.
Jan 24 00:32:08.830761 kubelet[3194]: I0124 00:32:08.830336 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6kncx" podStartSLOduration=2.830313068 podStartE2EDuration="2.830313068s" podCreationTimestamp="2026-01-24 00:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:32:07.666400154 +0000 UTC m=+8.220356070" watchObservedRunningTime="2026-01-24 00:32:08.830313068 +0000 UTC m=+9.384268984"
Jan 24 00:32:09.353735 containerd[1990]: time="2026-01-24T00:32:09.353671554Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:09.355723 containerd[1990]: time="2026-01-24T00:32:09.355523430Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 24 00:32:09.357835 containerd[1990]: time="2026-01-24T00:32:09.357805649Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:09.361961 containerd[1990]: time="2026-01-24T00:32:09.361199707Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:09.361961 containerd[1990]: time="2026-01-24T00:32:09.361846956Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.04132692s"
Jan 24 00:32:09.361961 containerd[1990]: time="2026-01-24T00:32:09.361874936Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 24 00:32:09.367670 containerd[1990]: time="2026-01-24T00:32:09.367617387Z" level=info msg="CreateContainer within sandbox \"0e4fbb42b28bbabcb90659f4dfa445aa3ed10dbb3cbd1fcba670080e43788f14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 24 00:32:09.418481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339882670.mount: Deactivated successfully.
Jan 24 00:32:09.444422 containerd[1990]: time="2026-01-24T00:32:09.444175446Z" level=info msg="CreateContainer within sandbox \"0e4fbb42b28bbabcb90659f4dfa445aa3ed10dbb3cbd1fcba670080e43788f14\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa\""
Jan 24 00:32:09.448231 containerd[1990]: time="2026-01-24T00:32:09.447040546Z" level=info msg="StartContainer for \"f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa\""
Jan 24 00:32:09.490370 systemd[1]: Started cri-containerd-f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa.scope - libcontainer container f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa.
Jan 24 00:32:09.556800 containerd[1990]: time="2026-01-24T00:32:09.556752858Z" level=info msg="StartContainer for \"f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa\" returns successfully"
Jan 24 00:32:11.345794 kubelet[3194]: I0124 00:32:11.345734 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jm6t2" podStartSLOduration=3.298529412 podStartE2EDuration="5.345716134s" podCreationTimestamp="2026-01-24 00:32:06 +0000 UTC" firstStartedPulling="2026-01-24 00:32:07.315800956 +0000 UTC m=+7.869756863" lastFinishedPulling="2026-01-24 00:32:09.362987691 +0000 UTC m=+9.916943585" observedRunningTime="2026-01-24 00:32:09.671616029 +0000 UTC m=+10.225571944" watchObservedRunningTime="2026-01-24 00:32:11.345716134 +0000 UTC m=+11.899672049"
Jan 24 00:32:46.338272 sudo[2301]: pam_unix(sudo:session): session closed for user root
Jan 24 00:32:46.419813 sshd[2298]: pam_unix(sshd:session): session closed for user core
Jan 24 00:32:46.426601 systemd[1]: sshd@6-172.31.16.136:22-4.153.228.146:46582.service: Deactivated successfully.
Jan 24 00:32:46.431935 systemd[1]: session-7.scope: Deactivated successfully.
Jan 24 00:32:46.433073 systemd[1]: session-7.scope: Consumed 6.732s CPU time, 144.5M memory peak, 0B memory swap peak.
Jan 24 00:32:46.437558 systemd-logind[1960]: Session 7 logged out. Waiting for processes to exit.
Jan 24 00:32:46.440853 systemd-logind[1960]: Removed session 7.
Jan 24 00:32:53.040216 systemd[1]: Created slice kubepods-besteffort-pod030e8cc2_da98_4044_b46d_b139705de1ee.slice - libcontainer container kubepods-besteffort-pod030e8cc2_da98_4044_b46d_b139705de1ee.slice.
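The pod_startup_latency_tracker entries are internally consistent. For tigera-operator above, podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, using the monotonic m=+ offsets) gives exactly the logged podStartSLOduration; containerd's own "Pulled image ... in 2.04132692s" is slightly shorter than the kubelet's 2.047s window, presumably because the kubelet's timestamps bracket the whole CRI pull call. A quick check of the arithmetic:

    package main

    import "fmt"

    func main() {
    	// m=+ offsets (seconds since kubelet start) copied from the
    	// tigera-operator "Observed pod startup duration" entry above.
    	const (
    		firstStartedPulling = 7.869756863
    		lastFinishedPulling = 9.916943585
    		e2e                 = 5.345716134 // podStartE2EDuration
    	)
    	pull := lastFinishedPulling - firstStartedPulling
    	fmt.Printf("pull window: ~%.9fs\n", pull)      // ~2.047186722s
    	fmt.Printf("SLO duration: ~%.9fs\n", e2e-pull) // ~3.298529412s, as logged
    }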
Jan 24 00:32:53.120791 kubelet[3194]: I0124 00:32:53.120735 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvvb\" (UniqueName: \"kubernetes.io/projected/030e8cc2-da98-4044-b46d-b139705de1ee-kube-api-access-8hvvb\") pod \"calico-typha-5fb494c8d6-4bpfv\" (UID: \"030e8cc2-da98-4044-b46d-b139705de1ee\") " pod="calico-system/calico-typha-5fb494c8d6-4bpfv"
Jan 24 00:32:53.120791 kubelet[3194]: I0124 00:32:53.120785 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/030e8cc2-da98-4044-b46d-b139705de1ee-tigera-ca-bundle\") pod \"calico-typha-5fb494c8d6-4bpfv\" (UID: \"030e8cc2-da98-4044-b46d-b139705de1ee\") " pod="calico-system/calico-typha-5fb494c8d6-4bpfv"
Jan 24 00:32:53.121392 kubelet[3194]: I0124 00:32:53.120806 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/030e8cc2-da98-4044-b46d-b139705de1ee-typha-certs\") pod \"calico-typha-5fb494c8d6-4bpfv\" (UID: \"030e8cc2-da98-4044-b46d-b139705de1ee\") " pod="calico-system/calico-typha-5fb494c8d6-4bpfv"
Jan 24 00:32:53.248003 systemd[1]: Created slice kubepods-besteffort-pod960954a7_9d37_422a_92a0_e5118c232e3e.slice - libcontainer container kubepods-besteffort-pod960954a7_9d37_422a_92a0_e5118c232e3e.slice.
Jan 24 00:32:53.323367 kubelet[3194]: I0124 00:32:53.322953 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-cni-log-dir\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323367 kubelet[3194]: I0124 00:32:53.322991 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-var-lib-calico\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323367 kubelet[3194]: I0124 00:32:53.323009 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-cni-net-dir\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323367 kubelet[3194]: I0124 00:32:53.323027 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/960954a7-9d37-422a-92a0-e5118c232e3e-node-certs\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323367 kubelet[3194]: I0124 00:32:53.323044 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-xtables-lock\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323589 kubelet[3194]: I0124 00:32:53.323086 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-cni-bin-dir\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323589 kubelet[3194]: I0124 00:32:53.323107 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-lib-modules\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323589 kubelet[3194]: I0124 00:32:53.323123 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-var-run-calico\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323589 kubelet[3194]: I0124 00:32:53.323160 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfd4k\" (UniqueName: \"kubernetes.io/projected/960954a7-9d37-422a-92a0-e5118c232e3e-kube-api-access-sfd4k\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323589 kubelet[3194]: I0124 00:32:53.323185 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-flexvol-driver-host\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323717 kubelet[3194]: I0124 00:32:53.323199 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/960954a7-9d37-422a-92a0-e5118c232e3e-policysync\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.323717 kubelet[3194]: I0124 00:32:53.323213 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960954a7-9d37-422a-92a0-e5118c232e3e-tigera-ca-bundle\") pod \"calico-node-t45xx\" (UID: \"960954a7-9d37-422a-92a0-e5118c232e3e\") " pod="calico-system/calico-node-t45xx"
Jan 24 00:32:53.352618 containerd[1990]: time="2026-01-24T00:32:53.352568424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb494c8d6-4bpfv,Uid:030e8cc2-da98-4044-b46d-b139705de1ee,Namespace:calico-system,Attempt:0,}"
Jan 24 00:32:53.402046 containerd[1990]: time="2026-01-24T00:32:53.398634404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:32:53.402046 containerd[1990]: time="2026-01-24T00:32:53.400307579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:32:53.402046 containerd[1990]: time="2026-01-24T00:32:53.400323028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:53.402046 containerd[1990]: time="2026-01-24T00:32:53.400418558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:53.437088 kubelet[3194]: E0124 00:32:53.436964 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.437088 kubelet[3194]: W0124 00:32:53.436988 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.437088 kubelet[3194]: E0124 00:32:53.437021 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.462305 kubelet[3194]: E0124 00:32:53.462276 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.462305 kubelet[3194]: W0124 00:32:53.462300 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.462636 kubelet[3194]: E0124 00:32:53.462319 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.478333 systemd[1]: Started cri-containerd-7ff57c062a94f8d66a0240d92598649efff23406773beb24a3115a255ccfeef1.scope - libcontainer container 7ff57c062a94f8d66a0240d92598649efff23406773beb24a3115a255ccfeef1.
Jan 24 00:32:53.485763 kubelet[3194]: E0124 00:32:53.485507 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d"
Jan 24 00:32:53.500236 kubelet[3194]: E0124 00:32:53.500201 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.500236 kubelet[3194]: W0124 00:32:53.500233 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.500454 kubelet[3194]: E0124 00:32:53.500258 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.501482 kubelet[3194]: E0124 00:32:53.501441 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.501482 kubelet[3194]: W0124 00:32:53.501468 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.501674 kubelet[3194]: E0124 00:32:53.501490 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.501760 kubelet[3194]: E0124 00:32:53.501721 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.501760 kubelet[3194]: W0124 00:32:53.501731 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.501760 kubelet[3194]: E0124 00:32:53.501748 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.515058 kubelet[3194]: E0124 00:32:53.515021 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.515058 kubelet[3194]: W0124 00:32:53.515055 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.515294 kubelet[3194]: E0124 00:32:53.515078 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.515392 kubelet[3194]: E0124 00:32:53.515375 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.515465 kubelet[3194]: W0124 00:32:53.515392 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.515465 kubelet[3194]: E0124 00:32:53.515407 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.515781 kubelet[3194]: E0124 00:32:53.515759 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.515781 kubelet[3194]: W0124 00:32:53.515780 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.515910 kubelet[3194]: E0124 00:32:53.515794 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.516130 kubelet[3194]: E0124 00:32:53.516101 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.516130 kubelet[3194]: W0124 00:32:53.516119 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.516271 kubelet[3194]: E0124 00:32:53.516132 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.516874 kubelet[3194]: E0124 00:32:53.516856 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.516874 kubelet[3194]: W0124 00:32:53.516874 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.517005 kubelet[3194]: E0124 00:32:53.516888 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.518153 kubelet[3194]: E0124 00:32:53.518124 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.518242 kubelet[3194]: W0124 00:32:53.518141 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.518242 kubelet[3194]: E0124 00:32:53.518179 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.518433 kubelet[3194]: E0124 00:32:53.518415 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.518433 kubelet[3194]: W0124 00:32:53.518433 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.518573 kubelet[3194]: E0124 00:32:53.518448 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.519210 kubelet[3194]: E0124 00:32:53.519193 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.519210 kubelet[3194]: W0124 00:32:53.519210 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.519345 kubelet[3194]: E0124 00:32:53.519224 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.519943 kubelet[3194]: E0124 00:32:53.519925 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.519943 kubelet[3194]: W0124 00:32:53.519943 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.520083 kubelet[3194]: E0124 00:32:53.519956 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.521021 kubelet[3194]: E0124 00:32:53.521003 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.521021 kubelet[3194]: W0124 00:32:53.521021 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.521205 kubelet[3194]: E0124 00:32:53.521034 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.522712 kubelet[3194]: E0124 00:32:53.522683 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.522712 kubelet[3194]: W0124 00:32:53.522702 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.522712 kubelet[3194]: E0124 00:32:53.522717 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.523158 kubelet[3194]: E0124 00:32:53.522943 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.523158 kubelet[3194]: W0124 00:32:53.522955 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.523158 kubelet[3194]: E0124 00:32:53.522967 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.523501 kubelet[3194]: E0124 00:32:53.523191 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.523501 kubelet[3194]: W0124 00:32:53.523200 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.523501 kubelet[3194]: E0124 00:32:53.523212 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.523859 kubelet[3194]: E0124 00:32:53.523709 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.523859 kubelet[3194]: W0124 00:32:53.523724 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.523859 kubelet[3194]: E0124 00:32:53.523738 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.524733 kubelet[3194]: E0124 00:32:53.524698 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.524733 kubelet[3194]: W0124 00:32:53.524716 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.524733 kubelet[3194]: E0124 00:32:53.524730 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.526393 kubelet[3194]: E0124 00:32:53.526373 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.526393 kubelet[3194]: W0124 00:32:53.526392 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.526524 kubelet[3194]: E0124 00:32:53.526406 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.526655 kubelet[3194]: E0124 00:32:53.526638 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.526704 kubelet[3194]: W0124 00:32:53.526655 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.526704 kubelet[3194]: E0124 00:32:53.526670 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.527882 kubelet[3194]: E0124 00:32:53.527046 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.527882 kubelet[3194]: W0124 00:32:53.527060 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.527882 kubelet[3194]: E0124 00:32:53.527074 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.527882 kubelet[3194]: I0124 00:32:53.527106 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6dea86f8-2783-4942-8476-4f769af7b22d-kubelet-dir\") pod \"csi-node-driver-zgckl\" (UID: \"6dea86f8-2783-4942-8476-4f769af7b22d\") " pod="calico-system/csi-node-driver-zgckl"
Jan 24 00:32:53.527882 kubelet[3194]: E0124 00:32:53.527442 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.527882 kubelet[3194]: W0124 00:32:53.527480 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.527882 kubelet[3194]: E0124 00:32:53.527494 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.528322 kubelet[3194]: I0124 00:32:53.528295 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6dea86f8-2783-4942-8476-4f769af7b22d-registration-dir\") pod \"csi-node-driver-zgckl\" (UID: \"6dea86f8-2783-4942-8476-4f769af7b22d\") " pod="calico-system/csi-node-driver-zgckl"
Jan 24 00:32:53.528606 kubelet[3194]: E0124 00:32:53.528586 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.528676 kubelet[3194]: W0124 00:32:53.528607 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.528676 kubelet[3194]: E0124 00:32:53.528621 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.528767 kubelet[3194]: I0124 00:32:53.528731 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6dea86f8-2783-4942-8476-4f769af7b22d-socket-dir\") pod \"csi-node-driver-zgckl\" (UID: \"6dea86f8-2783-4942-8476-4f769af7b22d\") " pod="calico-system/csi-node-driver-zgckl"
Jan 24 00:32:53.528910 kubelet[3194]: E0124 00:32:53.528892 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.528910 kubelet[3194]: W0124 00:32:53.528909 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.529091 kubelet[3194]: E0124 00:32:53.528922 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.529745 kubelet[3194]: E0124 00:32:53.529211 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.529745 kubelet[3194]: W0124 00:32:53.529225 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.529745 kubelet[3194]: E0124 00:32:53.529237 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.530241 kubelet[3194]: E0124 00:32:53.530222 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.530241 kubelet[3194]: W0124 00:32:53.530240 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.530356 kubelet[3194]: E0124 00:32:53.530257 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.530356 kubelet[3194]: I0124 00:32:53.530299 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6dea86f8-2783-4942-8476-4f769af7b22d-varrun\") pod \"csi-node-driver-zgckl\" (UID: \"6dea86f8-2783-4942-8476-4f769af7b22d\") " pod="calico-system/csi-node-driver-zgckl"
Jan 24 00:32:53.532725 kubelet[3194]: E0124 00:32:53.532703 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.532806 kubelet[3194]: W0124 00:32:53.532725 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.532806 kubelet[3194]: E0124 00:32:53.532740 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.533084 kubelet[3194]: E0124 00:32:53.532957 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.533084 kubelet[3194]: W0124 00:32:53.532969 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.533084 kubelet[3194]: E0124 00:32:53.532981 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.535156 kubelet[3194]: E0124 00:32:53.534216 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.535156 kubelet[3194]: W0124 00:32:53.534233 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.535156 kubelet[3194]: E0124 00:32:53.534247 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.535156 kubelet[3194]: I0124 00:32:53.534280 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzv8\" (UniqueName: \"kubernetes.io/projected/6dea86f8-2783-4942-8476-4f769af7b22d-kube-api-access-8bzv8\") pod \"csi-node-driver-zgckl\" (UID: \"6dea86f8-2783-4942-8476-4f769af7b22d\") " pod="calico-system/csi-node-driver-zgckl"
Jan 24 00:32:53.536221 kubelet[3194]: E0124 00:32:53.536201 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.536221 kubelet[3194]: W0124 00:32:53.536221 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.536352 kubelet[3194]: E0124 00:32:53.536237 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.537213 kubelet[3194]: E0124 00:32:53.537194 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.537290 kubelet[3194]: W0124 00:32:53.537213 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.537290 kubelet[3194]: E0124 00:32:53.537229 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.538649 kubelet[3194]: E0124 00:32:53.538621 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.538649 kubelet[3194]: W0124 00:32:53.538639 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.538791 kubelet[3194]: E0124 00:32:53.538675 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.540546 kubelet[3194]: E0124 00:32:53.540269 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.540546 kubelet[3194]: W0124 00:32:53.540284 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.540546 kubelet[3194]: E0124 00:32:53.540298 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.540546 kubelet[3194]: E0124 00:32:53.540547 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.540810 kubelet[3194]: W0124 00:32:53.540560 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.540810 kubelet[3194]: E0124 00:32:53.540573 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.541120 kubelet[3194]: E0124 00:32:53.541106 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.541316 kubelet[3194]: W0124 00:32:53.541120 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.541316 kubelet[3194]: E0124 00:32:53.541134 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.570779 containerd[1990]: time="2026-01-24T00:32:53.570284964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t45xx,Uid:960954a7-9d37-422a-92a0-e5118c232e3e,Namespace:calico-system,Attempt:0,}"
Jan 24 00:32:53.614829 containerd[1990]: time="2026-01-24T00:32:53.614629487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb494c8d6-4bpfv,Uid:030e8cc2-da98-4044-b46d-b139705de1ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ff57c062a94f8d66a0240d92598649efff23406773beb24a3115a255ccfeef1\""
Jan 24 00:32:53.620157 containerd[1990]: time="2026-01-24T00:32:53.619924757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:32:53.628713 containerd[1990]: time="2026-01-24T00:32:53.628619378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:32:53.628713 containerd[1990]: time="2026-01-24T00:32:53.628689368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:32:53.628964 containerd[1990]: time="2026-01-24T00:32:53.628888553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:53.629397 containerd[1990]: time="2026-01-24T00:32:53.629340157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:32:53.641963 kubelet[3194]: E0124 00:32:53.641931 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.641963 kubelet[3194]: W0124 00:32:53.641953 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.641963 kubelet[3194]: E0124 00:32:53.641976 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.642301 kubelet[3194]: E0124 00:32:53.642279 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.642301 kubelet[3194]: W0124 00:32:53.642294 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.642466 kubelet[3194]: E0124 00:32:53.642310 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.643246 kubelet[3194]: E0124 00:32:53.643221 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.643246 kubelet[3194]: W0124 00:32:53.643240 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.643246 kubelet[3194]: E0124 00:32:53.643255 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.643576 kubelet[3194]: E0124 00:32:53.643561 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.643576 kubelet[3194]: W0124 00:32:53.643574 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.643721 kubelet[3194]: E0124 00:32:53.643587 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.645473 kubelet[3194]: E0124 00:32:53.644428 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.645473 kubelet[3194]: W0124 00:32:53.644443 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.645473 kubelet[3194]: E0124 00:32:53.644457 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.645473 kubelet[3194]: E0124 00:32:53.644740 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.645473 kubelet[3194]: W0124 00:32:53.644749 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.645473 kubelet[3194]: E0124 00:32:53.644760 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.645473 kubelet[3194]: E0124 00:32:53.644945 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.645473 kubelet[3194]: W0124 00:32:53.644954 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.645473 kubelet[3194]: E0124 00:32:53.644964 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.645473 kubelet[3194]: E0124 00:32:53.645176 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.649380 kubelet[3194]: W0124 00:32:53.645186 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.649380 kubelet[3194]: E0124 00:32:53.645198 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:32:53.649380 kubelet[3194]: E0124 00:32:53.645578 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:53.649380 kubelet[3194]: W0124 00:32:53.645589 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:53.649380 kubelet[3194]: E0124 00:32:53.645603 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 24 00:32:53.649380 kubelet[3194]: E0124 00:32:53.647814 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.649380 kubelet[3194]: W0124 00:32:53.647828 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.649380 kubelet[3194]: E0124 00:32:53.647846 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.649380 kubelet[3194]: E0124 00:32:53.648090 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.649380 kubelet[3194]: W0124 00:32:53.648102 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.650058 kubelet[3194]: E0124 00:32:53.648115 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.650058 kubelet[3194]: E0124 00:32:53.648394 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.650058 kubelet[3194]: W0124 00:32:53.648405 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.650058 kubelet[3194]: E0124 00:32:53.648418 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.654590 kubelet[3194]: E0124 00:32:53.654567 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.656413 kubelet[3194]: W0124 00:32:53.656173 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.656413 kubelet[3194]: E0124 00:32:53.656211 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.657224 kubelet[3194]: E0124 00:32:53.656735 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.657224 kubelet[3194]: W0124 00:32:53.656754 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.657224 kubelet[3194]: E0124 00:32:53.656774 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:53.657224 kubelet[3194]: E0124 00:32:53.657083 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.657224 kubelet[3194]: W0124 00:32:53.657094 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.657224 kubelet[3194]: E0124 00:32:53.657107 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.659288 kubelet[3194]: E0124 00:32:53.659252 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.659369 kubelet[3194]: W0124 00:32:53.659304 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.659369 kubelet[3194]: E0124 00:32:53.659325 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.660617 kubelet[3194]: E0124 00:32:53.659645 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.660617 kubelet[3194]: W0124 00:32:53.659658 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.660617 kubelet[3194]: E0124 00:32:53.659672 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.660772 kubelet[3194]: E0124 00:32:53.660651 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.660772 kubelet[3194]: W0124 00:32:53.660663 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.660772 kubelet[3194]: E0124 00:32:53.660677 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.662201 kubelet[3194]: E0124 00:32:53.661694 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.662201 kubelet[3194]: W0124 00:32:53.661708 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.662201 kubelet[3194]: E0124 00:32:53.661721 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:53.662201 kubelet[3194]: E0124 00:32:53.661936 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.662201 kubelet[3194]: W0124 00:32:53.661945 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.662201 kubelet[3194]: E0124 00:32:53.661957 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.662201 kubelet[3194]: E0124 00:32:53.662154 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.662201 kubelet[3194]: W0124 00:32:53.662163 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.662201 kubelet[3194]: E0124 00:32:53.662174 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.662653 kubelet[3194]: E0124 00:32:53.662446 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.662653 kubelet[3194]: W0124 00:32:53.662456 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.662653 kubelet[3194]: E0124 00:32:53.662469 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.662801 kubelet[3194]: E0124 00:32:53.662688 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.662801 kubelet[3194]: W0124 00:32:53.662697 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.662801 kubelet[3194]: E0124 00:32:53.662709 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.662935 kubelet[3194]: E0124 00:32:53.662894 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.662935 kubelet[3194]: W0124 00:32:53.662903 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.662935 kubelet[3194]: E0124 00:32:53.662913 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:32:53.663185 kubelet[3194]: E0124 00:32:53.663132 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.663185 kubelet[3194]: W0124 00:32:53.663185 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.663297 kubelet[3194]: E0124 00:32:53.663198 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.665069 systemd[1]: Started cri-containerd-64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09.scope - libcontainer container 64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09. Jan 24 00:32:53.670844 kubelet[3194]: E0124 00:32:53.670821 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:32:53.671827 kubelet[3194]: W0124 00:32:53.670962 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:32:53.671827 kubelet[3194]: E0124 00:32:53.670986 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:32:53.697826 containerd[1990]: time="2026-01-24T00:32:53.697791221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t45xx,Uid:960954a7-9d37-422a-92a0-e5118c232e3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09\"" Jan 24 00:32:54.603335 kubelet[3194]: E0124 00:32:54.603119 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:32:55.021488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627007574.mount: Deactivated successfully. 
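The driver-call.go / plugins.go burst above is kubelet's FlexVolume probe: on each scan of the volume plugin directory shown in the log it executes each driver binary with the single argument init and JSON-decodes whatever the driver prints on stdout. Since the nodeagent~uds/uds executable does not exist yet (Calico's flexvol-driver container, started further down, is what installs it), the call yields empty output and decoding fails with "unexpected end of JSON input". Below is a minimal illustrative stub of that init handshake in Go; it is not Calico's actual uds driver, only the response shape the protocol expects:

```go
// Illustrative FlexVolume driver stub (not Calico's real uds binary):
// kubelet invokes the driver as "<driver> init" and expects a JSON status
// object on stdout. An empty stdout is exactly what produces
// "unexpected end of JSON input" in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the FlexVolume protocol expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, code int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Declare no attach/detach support so kubelet only issues
		// mount-style calls to this driver.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
	}
	// Any other verb: report it as unsupported.
	reply(driverStatus{Status: "Not supported"}, 1)
}
```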
Jan 24 00:32:56.083669 containerd[1990]: time="2026-01-24T00:32:56.083616140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:56.084856 containerd[1990]: time="2026-01-24T00:32:56.084675578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:32:56.087091 containerd[1990]: time="2026-01-24T00:32:56.086011712Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:56.088164 containerd[1990]: time="2026-01-24T00:32:56.088098645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:56.088793 containerd[1990]: time="2026-01-24T00:32:56.088680340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.468704645s"
Jan 24 00:32:56.088793 containerd[1990]: time="2026-01-24T00:32:56.088710770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:32:56.089755 containerd[1990]: time="2026-01-24T00:32:56.089664173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:32:56.106350 containerd[1990]: time="2026-01-24T00:32:56.106309998Z" level=info msg="CreateContainer within sandbox \"7ff57c062a94f8d66a0240d92598649efff23406773beb24a3115a255ccfeef1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:32:56.124447 containerd[1990]: time="2026-01-24T00:32:56.124398382Z" level=info msg="CreateContainer within sandbox \"7ff57c062a94f8d66a0240d92598649efff23406773beb24a3115a255ccfeef1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"80eb770d077401685021030d036c22f42ab51a151504d590f28a3ad7ced5fb57\""
Jan 24 00:32:56.125398 containerd[1990]: time="2026-01-24T00:32:56.125354671Z" level=info msg="StartContainer for \"80eb770d077401685021030d036c22f42ab51a151504d590f28a3ad7ced5fb57\""
Jan 24 00:32:56.181511 systemd[1]: Started cri-containerd-80eb770d077401685021030d036c22f42ab51a151504d590f28a3ad7ced5fb57.scope - libcontainer container 80eb770d077401685021030d036c22f42ab51a151504d590f28a3ad7ced5fb57.
Jan 24 00:32:56.238359 containerd[1990]: time="2026-01-24T00:32:56.236419579Z" level=info msg="StartContainer for \"80eb770d077401685021030d036c22f42ab51a151504d590f28a3ad7ced5fb57\" returns successfully"
Jan 24 00:32:56.603369 kubelet[3194]: E0124 00:32:56.603307 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d"
Jan 24 00:32:56.800914 kubelet[3194]: I0124 00:32:56.798978 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fb494c8d6-4bpfv" podStartSLOduration=2.326886064 podStartE2EDuration="4.798085355s" podCreationTimestamp="2026-01-24 00:32:52 +0000 UTC" firstStartedPulling="2026-01-24 00:32:53.618355281 +0000 UTC m=+54.172311177" lastFinishedPulling="2026-01-24 00:32:56.089554575 +0000 UTC m=+56.643510468" observedRunningTime="2026-01-24 00:32:56.797674479 +0000 UTC m=+57.351630394" watchObservedRunningTime="2026-01-24 00:32:56.798085355 +0000 UTC m=+57.352041270"
Jan 24 00:32:56.851436 kubelet[3194]: E0124 00:32:56.851400 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:32:56.851436 kubelet[3194]: W0124 00:32:56.851425 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:32:56.851436 kubelet[3194]: E0124 00:32:56.851450 3194 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
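One detail worth unpacking in the pod_startup_latency_tracker entry above: podStartSLOduration is podStartE2EDuration minus the image-pull window (the pod-startup SLI excludes time spent pulling images), and the monotonic m=+ offsets in that entry reproduce the logged value exactly:

\[
\begin{aligned}
\text{podStartE2EDuration} &= 00{:}32{:}56.798085355 - 00{:}32{:}52 = 4.798085355\ \text{s},\\
\text{pull window} &= 56.643510468 - 54.172311177 = 2.471199291\ \text{s}\quad (m{=}{+}\ \text{offsets}),\\
\text{podStartSLOduration} &= 4.798085355 - 2.471199291 = 2.326886064\ \text{s}.
\end{aligned}
\]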
Jan 24 00:32:57.414358 containerd[1990]: time="2026-01-24T00:32:57.414301162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:57.415509 containerd[1990]: time="2026-01-24T00:32:57.415355231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 24 00:32:57.417746 containerd[1990]: time="2026-01-24T00:32:57.416400293Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:57.418828 containerd[1990]: time="2026-01-24T00:32:57.418703393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:32:57.419456 containerd[1990]: time="2026-01-24T00:32:57.419423383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.329733222s"
Jan 24 00:32:57.419536 containerd[1990]: time="2026-01-24T00:32:57.419459458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 24 00:32:57.424195 containerd[1990]: time="2026-01-24T00:32:57.424151188Z" level=info msg="CreateContainer within sandbox \"64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 24 00:32:57.452917 containerd[1990]: time="2026-01-24T00:32:57.452830654Z" level=info msg="CreateContainer within sandbox \"64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81\""
Jan 24 00:32:57.454129 containerd[1990]: time="2026-01-24T00:32:57.453535608Z" level=info msg="StartContainer for \"ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81\""
Jan 24 00:32:57.487359 systemd[1]: Started cri-containerd-ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81.scope - libcontainer container ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81.
Jan 24 00:32:57.515980 containerd[1990]: time="2026-01-24T00:32:57.515909980Z" level=info msg="StartContainer for \"ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81\" returns successfully"
Jan 24 00:32:57.526237 systemd[1]: cri-containerd-ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81.scope: Deactivated successfully.
Jan 24 00:32:57.552979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81-rootfs.mount: Deactivated successfully.
Jan 24 00:32:57.590033 containerd[1990]: time="2026-01-24T00:32:57.589922848Z" level=info msg="shim disconnected" id=ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81 namespace=k8s.io
Jan 24 00:32:57.590033 containerd[1990]: time="2026-01-24T00:32:57.590023132Z" level=warning msg="cleaning up after shim disconnected" id=ca293dcfba84419a111e6bca3c096403f8bfd21e424762614cc13160afe5ad81 namespace=k8s.io
Jan 24 00:32:57.590033 containerd[1990]: time="2026-01-24T00:32:57.590032180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:32:57.789017 containerd[1990]: time="2026-01-24T00:32:57.788903061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 24 00:32:58.603114 kubelet[3194]: E0124 00:32:58.603030 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d"
Jan 24 00:33:00.603565 kubelet[3194]: E0124 00:33:00.603515 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d"
Jan 24 00:33:02.108374 containerd[1990]: time="2026-01-24T00:33:02.108321314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:33:02.111244 containerd[1990]: time="2026-01-24T00:33:02.111178394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 24 00:33:02.115597 containerd[1990]: time="2026-01-24T00:33:02.113877593Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:33:02.131096 containerd[1990]: time="2026-01-24T00:33:02.131040431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:33:02.132467 containerd[1990]: time="2026-01-24T00:33:02.132417752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.343478424s"
Jan 24 00:33:02.132467 containerd[1990]: time="2026-01-24T00:33:02.132459819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 24 00:33:02.156390 containerd[1990]: time="2026-01-24T00:33:02.156343280Z" level=info msg="CreateContainer within sandbox \"64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 24 00:33:02.316594 containerd[1990]: time="2026-01-24T00:33:02.316404583Z" level=info msg="CreateContainer within sandbox \"64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804\""
Jan 24 00:33:02.322696 containerd[1990]: time="2026-01-24T00:33:02.321811278Z" level=info msg="StartContainer for \"0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804\""
Jan 24 00:33:02.371159 systemd[1]: Started cri-containerd-0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804.scope - libcontainer container 0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804.
Jan 24 00:33:02.497910 containerd[1990]: time="2026-01-24T00:33:02.497855417Z" level=info msg="StartContainer for \"0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804\" returns successfully"
Jan 24 00:33:02.604183 kubelet[3194]: E0124 00:33:02.603303 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d"
Jan 24 00:33:04.508001 systemd[1]: cri-containerd-0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804.scope: Deactivated successfully.
Jan 24 00:33:04.568329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804-rootfs.mount: Deactivated successfully.
Jan 24 00:33:04.575583 containerd[1990]: time="2026-01-24T00:33:04.575295005Z" level=info msg="shim disconnected" id=0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804 namespace=k8s.io
Jan 24 00:33:04.575583 containerd[1990]: time="2026-01-24T00:33:04.575370637Z" level=warning msg="cleaning up after shim disconnected" id=0e1f0b729e59c21c2744ab01615ad6c4e86b052d4e8975d6642d369a8b8cd804 namespace=k8s.io
Jan 24 00:33:04.575583 containerd[1990]: time="2026-01-24T00:33:04.575384204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:33:04.597861 containerd[1990]: time="2026-01-24T00:33:04.597774166Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:33:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 24 00:33:04.604098 kubelet[3194]: I0124 00:33:04.588018 3194 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 24 00:33:04.634689 systemd[1]: Created slice kubepods-besteffort-pod6dea86f8_2783_4942_8476_4f769af7b22d.slice - libcontainer container kubepods-besteffort-pod6dea86f8_2783_4942_8476_4f769af7b22d.slice.
Jan 24 00:33:04.664826 containerd[1990]: time="2026-01-24T00:33:04.662904541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgckl,Uid:6dea86f8-2783-4942-8476-4f769af7b22d,Namespace:calico-system,Attempt:0,}"
Jan 24 00:33:04.769025 systemd[1]: Created slice kubepods-burstable-pod8d604c20_234e_4790_af46_e3ccb6ebbab2.slice - libcontainer container kubepods-burstable-pod8d604c20_234e_4790_af46_e3ccb6ebbab2.slice.
Jan 24 00:33:04.801058 systemd[1]: Created slice kubepods-burstable-pode6c717e9_efbf_49cc_b817_198682317a0f.slice - libcontainer container kubepods-burstable-pode6c717e9_efbf_49cc_b817_198682317a0f.slice.
Jan 24 00:33:04.815093 systemd[1]: Created slice kubepods-besteffort-pode0ff43df_473e_4ec1_a924_95f6d305484f.slice - libcontainer container kubepods-besteffort-pode0ff43df_473e_4ec1_a924_95f6d305484f.slice.
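The Created slice entries here show the kubelet's systemd cgroup driver materializing one transient slice per pod, named from the pod's QoS class plus its UID with dashes mapped to underscores (systemd reserves "-" as its hierarchy separator in unit names). A small sketch reproducing the naming visible in this journal; podSliceName is a hypothetical helper for illustration, not a kubelet API:

```go
// Sketch of the pod-slice naming visible in the journal; podSliceName is a
// hypothetical helper, not a kubelet API.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the leaf slice unit for a pod. systemd flattens the
// hierarchy kubepods.slice/kubepods-<qos>.slice/kubepods-<qos>-pod<uid>.slice
// into dash-joined unit names, so the UID's own dashes become underscores.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "" { // guaranteed pods sit directly under kubepods.slice
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// Prints kubepods-besteffort-pod6dea86f8_2783_4942_8476_4f769af7b22d.slice,
	// matching the csi-node-driver-zgckl pod's slice above.
	fmt.Println(podSliceName("besteffort", "6dea86f8-2783-4942-8476-4f769af7b22d"))
}
```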
Jan 24 00:33:04.847316 systemd[1]: Created slice kubepods-besteffort-pod8bd1b1e2_6c2f_496d_84df_3687f4a4a992.slice - libcontainer container kubepods-besteffort-pod8bd1b1e2_6c2f_496d_84df_3687f4a4a992.slice. Jan 24 00:33:04.850403 containerd[1990]: time="2026-01-24T00:33:04.848418345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:33:04.879607 systemd[1]: Created slice kubepods-besteffort-pod44356a3b_6e7e_4852_a5bd_fffe6e033ca3.slice - libcontainer container kubepods-besteffort-pod44356a3b_6e7e_4852_a5bd_fffe6e033ca3.slice. Jan 24 00:33:04.900225 kubelet[3194]: I0124 00:33:04.900018 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c7963cb-5f76-453d-b9ca-f28ed3f17ce0-tigera-ca-bundle\") pod \"calico-kube-controllers-777f8fb74-d8qgp\" (UID: \"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0\") " pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" Jan 24 00:33:04.904555 kubelet[3194]: I0124 00:33:04.902994 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjvr\" (UniqueName: \"kubernetes.io/projected/e6c717e9-efbf-49cc-b817-198682317a0f-kube-api-access-lzjvr\") pod \"coredns-674b8bbfcf-xk4gx\" (UID: \"e6c717e9-efbf-49cc-b817-198682317a0f\") " pod="kube-system/coredns-674b8bbfcf-xk4gx" Jan 24 00:33:04.904920 kubelet[3194]: I0124 00:33:04.904895 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2mk\" (UniqueName: \"kubernetes.io/projected/2953039e-0e7f-4027-9a3c-137a03fa2153-kube-api-access-bs2mk\") pod \"goldmane-666569f655-t69bh\" (UID: \"2953039e-0e7f-4027-9a3c-137a03fa2153\") " pod="calico-system/goldmane-666569f655-t69bh" Jan 24 00:33:04.905070 kubelet[3194]: I0124 00:33:04.905056 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/44356a3b-6e7e-4852-a5bd-fffe6e033ca3-calico-apiserver-certs\") pod \"calico-apiserver-59955b8999-49mgw\" (UID: \"44356a3b-6e7e-4852-a5bd-fffe6e033ca3\") " pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" Jan 24 00:33:04.905339 kubelet[3194]: I0124 00:33:04.905297 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr42q\" (UniqueName: \"kubernetes.io/projected/44356a3b-6e7e-4852-a5bd-fffe6e033ca3-kube-api-access-zr42q\") pod \"calico-apiserver-59955b8999-49mgw\" (UID: \"44356a3b-6e7e-4852-a5bd-fffe6e033ca3\") " pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" Jan 24 00:33:04.907358 kubelet[3194]: I0124 00:33:04.905479 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-backend-key-pair\") pod \"whisker-7f66bf4696-t77qb\" (UID: \"e0ff43df-473e-4ec1-a924-95f6d305484f\") " pod="calico-system/whisker-7f66bf4696-t77qb" Jan 24 00:33:04.910004 kubelet[3194]: I0124 00:33:04.909964 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-ca-bundle\") pod \"whisker-7f66bf4696-t77qb\" (UID: \"e0ff43df-473e-4ec1-a924-95f6d305484f\") " pod="calico-system/whisker-7f66bf4696-t77qb" Jan 24 00:33:04.910199 
kubelet[3194]: I0124 00:33:04.910173 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4vw\" (UniqueName: \"kubernetes.io/projected/0c7963cb-5f76-453d-b9ca-f28ed3f17ce0-kube-api-access-jr4vw\") pod \"calico-kube-controllers-777f8fb74-d8qgp\" (UID: \"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0\") " pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" Jan 24 00:33:04.910332 kubelet[3194]: I0124 00:33:04.910318 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6c717e9-efbf-49cc-b817-198682317a0f-config-volume\") pod \"coredns-674b8bbfcf-xk4gx\" (UID: \"e6c717e9-efbf-49cc-b817-198682317a0f\") " pod="kube-system/coredns-674b8bbfcf-xk4gx" Jan 24 00:33:04.910459 kubelet[3194]: I0124 00:33:04.910446 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d604c20-234e-4790-af46-e3ccb6ebbab2-config-volume\") pod \"coredns-674b8bbfcf-vpf5l\" (UID: \"8d604c20-234e-4790-af46-e3ccb6ebbab2\") " pod="kube-system/coredns-674b8bbfcf-vpf5l" Jan 24 00:33:04.911674 kubelet[3194]: I0124 00:33:04.911467 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2953039e-0e7f-4027-9a3c-137a03fa2153-goldmane-ca-bundle\") pod \"goldmane-666569f655-t69bh\" (UID: \"2953039e-0e7f-4027-9a3c-137a03fa2153\") " pod="calico-system/goldmane-666569f655-t69bh" Jan 24 00:33:04.911914 kubelet[3194]: I0124 00:33:04.911894 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4x9v\" (UniqueName: \"kubernetes.io/projected/e0ff43df-473e-4ec1-a924-95f6d305484f-kube-api-access-r4x9v\") pod \"whisker-7f66bf4696-t77qb\" (UID: \"e0ff43df-473e-4ec1-a924-95f6d305484f\") " pod="calico-system/whisker-7f66bf4696-t77qb" Jan 24 00:33:04.914189 systemd[1]: Created slice kubepods-besteffort-pod2953039e_0e7f_4027_9a3c_137a03fa2153.slice - libcontainer container kubepods-besteffort-pod2953039e_0e7f_4027_9a3c_137a03fa2153.slice. 
Jan 24 00:33:04.916022 kubelet[3194]: I0124 00:33:04.915990 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7djh\" (UniqueName: \"kubernetes.io/projected/8bd1b1e2-6c2f-496d-84df-3687f4a4a992-kube-api-access-j7djh\") pod \"calico-apiserver-59955b8999-7njzz\" (UID: \"8bd1b1e2-6c2f-496d-84df-3687f4a4a992\") " pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" Jan 24 00:33:04.916234 kubelet[3194]: I0124 00:33:04.916168 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2953039e-0e7f-4027-9a3c-137a03fa2153-goldmane-key-pair\") pod \"goldmane-666569f655-t69bh\" (UID: \"2953039e-0e7f-4027-9a3c-137a03fa2153\") " pod="calico-system/goldmane-666569f655-t69bh" Jan 24 00:33:04.918264 kubelet[3194]: I0124 00:33:04.917704 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8bd1b1e2-6c2f-496d-84df-3687f4a4a992-calico-apiserver-certs\") pod \"calico-apiserver-59955b8999-7njzz\" (UID: \"8bd1b1e2-6c2f-496d-84df-3687f4a4a992\") " pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" Jan 24 00:33:04.918352 kubelet[3194]: I0124 00:33:04.918287 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d8c2\" (UniqueName: \"kubernetes.io/projected/8d604c20-234e-4790-af46-e3ccb6ebbab2-kube-api-access-6d8c2\") pod \"coredns-674b8bbfcf-vpf5l\" (UID: \"8d604c20-234e-4790-af46-e3ccb6ebbab2\") " pod="kube-system/coredns-674b8bbfcf-vpf5l" Jan 24 00:33:04.926168 kubelet[3194]: I0124 00:33:04.918619 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2953039e-0e7f-4027-9a3c-137a03fa2153-config\") pod \"goldmane-666569f655-t69bh\" (UID: \"2953039e-0e7f-4027-9a3c-137a03fa2153\") " pod="calico-system/goldmane-666569f655-t69bh" Jan 24 00:33:04.933836 systemd[1]: Created slice kubepods-besteffort-pod0c7963cb_5f76_453d_b9ca_f28ed3f17ce0.slice - libcontainer container kubepods-besteffort-pod0c7963cb_5f76_453d_b9ca_f28ed3f17ce0.slice. 
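
The kubepods-*.slice units systemd keeps creating above encode each pod's QoS class and UID. systemd treats "-" as the slice hierarchy separator, so the UID's dashes are swapped for underscores to keep the whole UID in a single path component. A minimal sketch of that naming rule, as a hypothetical helper rather than kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

// podSliceName is a hypothetical illustration of the pattern visible in the
// log, not kubelet's real implementation.
func podSliceName(qosClass, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Reproduces "kubepods-besteffort-pod6dea86f8_2783_4942_8476_4f769af7b22d.slice" above.
	fmt.Println(podSliceName("besteffort", "6dea86f8-2783-4942-8476-4f769af7b22d"))
}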
Jan 24 00:33:05.108836 containerd[1990]: time="2026-01-24T00:33:05.108427536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xk4gx,Uid:e6c717e9-efbf-49cc-b817-198682317a0f,Namespace:kube-system,Attempt:0,}" Jan 24 00:33:05.135977 containerd[1990]: time="2026-01-24T00:33:05.135307643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f66bf4696-t77qb,Uid:e0ff43df-473e-4ec1-a924-95f6d305484f,Namespace:calico-system,Attempt:0,}" Jan 24 00:33:05.175917 containerd[1990]: time="2026-01-24T00:33:05.175877806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-7njzz,Uid:8bd1b1e2-6c2f-496d-84df-3687f4a4a992,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:33:05.199888 containerd[1990]: time="2026-01-24T00:33:05.199841412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-49mgw,Uid:44356a3b-6e7e-4852-a5bd-fffe6e033ca3,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:33:05.229482 containerd[1990]: time="2026-01-24T00:33:05.229435900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t69bh,Uid:2953039e-0e7f-4027-9a3c-137a03fa2153,Namespace:calico-system,Attempt:0,}" Jan 24 00:33:05.254873 containerd[1990]: time="2026-01-24T00:33:05.254835296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777f8fb74-d8qgp,Uid:0c7963cb-5f76-453d-b9ca-f28ed3f17ce0,Namespace:calico-system,Attempt:0,}" Jan 24 00:33:05.389019 containerd[1990]: time="2026-01-24T00:33:05.388721762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpf5l,Uid:8d604c20-234e-4790-af46-e3ccb6ebbab2,Namespace:kube-system,Attempt:0,}" Jan 24 00:33:11.428015 containerd[1990]: time="2026-01-24T00:33:11.427919069Z" level=error msg="Failed to destroy network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.430308 containerd[1990]: time="2026-01-24T00:33:11.428723295Z" level=error msg="encountered an error cleaning up failed sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.430308 containerd[1990]: time="2026-01-24T00:33:11.428794861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t69bh,Uid:2953039e-0e7f-4027-9a3c-137a03fa2153,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.437560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69-shm.mount: Deactivated successfully. 
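
Every sandbox failure in this stretch shares one root cause: the CNI config is installed (hence the node is Ready and pods are being scheduled), but the calico/node image is still being pulled (PullImage at 00:33:04, completing at 00:33:13 below), and it is calico/node that writes /var/lib/calico/nodename when it starts. Until that file exists, every CNI add and delete fails with the same stat error. A rough sketch of the lookup the plugin appears to be failing on, assuming the conventional file path (the real plugin can also be configured to tolerate a missing nodename file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by calico/node at startup; its absence is what
// surfaces as "stat /var/lib/calico/nodename: no such file or directory"
// in the RunPodSandbox errors throughout this log.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Fprintf(os.Stderr, "calico/node has not initialized this host yet: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("node name:", strings.TrimSpace(string(data)))
}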
Jan 24 00:33:11.463245 containerd[1990]: time="2026-01-24T00:33:11.461917049Z" level=error msg="Failed to destroy network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.463245 containerd[1990]: time="2026-01-24T00:33:11.462428560Z" level=error msg="encountered an error cleaning up failed sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.463245 containerd[1990]: time="2026-01-24T00:33:11.462500484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777f8fb74-d8qgp,Uid:0c7963cb-5f76-453d-b9ca-f28ed3f17ce0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.463245 containerd[1990]: time="2026-01-24T00:33:11.462616088Z" level=error msg="Failed to destroy network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.470341 containerd[1990]: time="2026-01-24T00:33:11.470284605Z" level=error msg="encountered an error cleaning up failed sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.470482 containerd[1990]: time="2026-01-24T00:33:11.470369249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f66bf4696-t77qb,Uid:e0ff43df-473e-4ec1-a924-95f6d305484f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.472100 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f-shm.mount: Deactivated successfully. Jan 24 00:33:11.473710 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64-shm.mount: Deactivated successfully. 
Jan 24 00:33:11.494817 containerd[1990]: time="2026-01-24T00:33:11.494770049Z" level=error msg="Failed to destroy network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.498436 containerd[1990]: time="2026-01-24T00:33:11.497305082Z" level=error msg="encountered an error cleaning up failed sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.498436 containerd[1990]: time="2026-01-24T00:33:11.498268393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-49mgw,Uid:44356a3b-6e7e-4852-a5bd-fffe6e033ca3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.500473 containerd[1990]: time="2026-01-24T00:33:11.500333723Z" level=error msg="Failed to destroy network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.501306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28-shm.mount: Deactivated successfully. Jan 24 00:33:11.506920 containerd[1990]: time="2026-01-24T00:33:11.505644192Z" level=error msg="encountered an error cleaning up failed sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.506920 containerd[1990]: time="2026-01-24T00:33:11.505729757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgckl,Uid:6dea86f8-2783-4942-8476-4f769af7b22d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.507835 containerd[1990]: time="2026-01-24T00:33:11.507674257Z" level=error msg="Failed to destroy network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.510185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c-shm.mount: Deactivated successfully. 
Jan 24 00:33:11.511935 kubelet[3194]: E0124 00:33:11.511824 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.512622 containerd[1990]: time="2026-01-24T00:33:11.509337236Z" level=error msg="Failed to destroy network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.512899 kubelet[3194]: E0124 00:33:11.512860 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.514544 containerd[1990]: time="2026-01-24T00:33:11.514290240Z" level=error msg="encountered an error cleaning up failed sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.514544 containerd[1990]: time="2026-01-24T00:33:11.514389485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpf5l,Uid:8d604c20-234e-4790-af46-e3ccb6ebbab2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.514544 containerd[1990]: time="2026-01-24T00:33:11.514428119Z" level=error msg="encountered an error cleaning up failed sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.514544 containerd[1990]: time="2026-01-24T00:33:11.514466725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-7njzz,Uid:8bd1b1e2-6c2f-496d-84df-3687f4a4a992,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.515241 kubelet[3194]: E0124 00:33:11.515202 3194 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" Jan 24 00:33:11.517886 kubelet[3194]: E0124 00:33:11.517741 3194 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgckl" Jan 24 00:33:11.517977 containerd[1990]: time="2026-01-24T00:33:11.517777869Z" level=error msg="Failed to destroy network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.519676 kubelet[3194]: E0124 00:33:11.519637 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgckl" Jan 24 00:33:11.519858 kubelet[3194]: E0124 00:33:11.519737 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:33:11.523205 kubelet[3194]: E0124 00:33:11.520898 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" Jan 24 00:33:11.523205 kubelet[3194]: E0124 00:33:11.520983 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-777f8fb74-d8qgp_calico-system(0c7963cb-5f76-453d-b9ca-f28ed3f17ce0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-777f8fb74-d8qgp_calico-system(0c7963cb-5f76-453d-b9ca-f28ed3f17ce0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:33:11.523205 kubelet[3194]: E0124 00:33:11.521123 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.523464 kubelet[3194]: E0124 00:33:11.521403 3194 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" Jan 24 00:33:11.523464 kubelet[3194]: E0124 00:33:11.521453 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" Jan 24 00:33:11.523464 kubelet[3194]: E0124 00:33:11.521499 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59955b8999-7njzz_calico-apiserver(8bd1b1e2-6c2f-496d-84df-3687f4a4a992)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59955b8999-7njzz_calico-apiserver(8bd1b1e2-6c2f-496d-84df-3687f4a4a992)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:33:11.523621 kubelet[3194]: E0124 00:33:11.521351 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.523621 kubelet[3194]: E0124 00:33:11.521539 3194 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vpf5l" Jan 24 00:33:11.523621 kubelet[3194]: E0124 00:33:11.521555 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vpf5l" Jan 24 00:33:11.523751 kubelet[3194]: E0124 00:33:11.521590 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vpf5l_kube-system(8d604c20-234e-4790-af46-e3ccb6ebbab2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vpf5l_kube-system(8d604c20-234e-4790-af46-e3ccb6ebbab2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vpf5l" podUID="8d604c20-234e-4790-af46-e3ccb6ebbab2" Jan 24 00:33:11.523751 kubelet[3194]: E0124 00:33:11.521280 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.523751 kubelet[3194]: E0124 00:33:11.521622 3194 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-t69bh" Jan 24 00:33:11.609028 kubelet[3194]: E0124 00:33:11.521639 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-t69bh" Jan 24 00:33:11.609028 kubelet[3194]: E0124 00:33:11.521677 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-t69bh_calico-system(2953039e-0e7f-4027-9a3c-137a03fa2153)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-t69bh_calico-system(2953039e-0e7f-4027-9a3c-137a03fa2153)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:33:11.609028 kubelet[3194]: E0124 00:33:11.521299 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.694276 containerd[1990]: time="2026-01-24T00:33:11.559875132Z" level=error msg="encountered an error cleaning up failed sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.694276 containerd[1990]: time="2026-01-24T00:33:11.559966599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xk4gx,Uid:e6c717e9-efbf-49cc-b817-198682317a0f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.694396 kubelet[3194]: E0124 00:33:11.521710 3194 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f66bf4696-t77qb" Jan 24 00:33:11.694396 kubelet[3194]: E0124 00:33:11.521729 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f66bf4696-t77qb" Jan 24 00:33:11.694396 kubelet[3194]: E0124 00:33:11.521765 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f66bf4696-t77qb_calico-system(e0ff43df-473e-4ec1-a924-95f6d305484f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f66bf4696-t77qb_calico-system(e0ff43df-473e-4ec1-a924-95f6d305484f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f66bf4696-t77qb" podUID="e0ff43df-473e-4ec1-a924-95f6d305484f" Jan 24 00:33:11.694548 kubelet[3194]: E0124 00:33:11.521325 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.694548 kubelet[3194]: E0124 00:33:11.521812 3194 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" Jan 24 00:33:11.694548 kubelet[3194]: E0124 00:33:11.521839 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" Jan 24 00:33:11.694634 kubelet[3194]: E0124 00:33:11.521873 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59955b8999-49mgw_calico-apiserver(44356a3b-6e7e-4852-a5bd-fffe6e033ca3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59955b8999-49mgw_calico-apiserver(44356a3b-6e7e-4852-a5bd-fffe6e033ca3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:33:11.694634 kubelet[3194]: E0124 00:33:11.560876 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:11.694634 kubelet[3194]: E0124 00:33:11.560924 3194 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xk4gx" Jan 24 00:33:11.694751 kubelet[3194]: E0124 00:33:11.560946 3194 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xk4gx" Jan 24 00:33:11.694751 kubelet[3194]: E0124 00:33:11.560990 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xk4gx_kube-system(e6c717e9-efbf-49cc-b817-198682317a0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-xk4gx_kube-system(e6c717e9-efbf-49cc-b817-198682317a0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xk4gx" podUID="e6c717e9-efbf-49cc-b817-198682317a0f" Jan 24 00:33:11.874563 kubelet[3194]: I0124 00:33:11.874527 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:33:11.878329 kubelet[3194]: I0124 00:33:11.878289 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:33:11.902565 kubelet[3194]: I0124 00:33:11.902249 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:33:11.908304 kubelet[3194]: I0124 00:33:11.908275 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:33:11.934802 kubelet[3194]: I0124 00:33:11.934665 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:33:11.941977 kubelet[3194]: I0124 00:33:11.941713 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:33:11.948685 kubelet[3194]: I0124 00:33:11.948501 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:33:11.991891 kubelet[3194]: I0124 00:33:11.990708 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:33:12.023472 containerd[1990]: time="2026-01-24T00:33:12.023285120Z" level=info msg="StopPodSandbox for \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.024078459Z" level=info msg="StopPodSandbox for \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.029559183Z" level=info msg="Ensure that sandbox 8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69 in task-service has been cleanup successfully" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.029639704Z" level=info msg="StopPodSandbox for \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.035600950Z" level=info msg="StopPodSandbox for \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.037785162Z" level=info msg="Ensure that sandbox fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28 in task-service has been cleanup successfully" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.029562714Z" level=info msg="Ensure that sandbox 
8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d in task-service has been cleanup successfully" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.045583243Z" level=info msg="StopPodSandbox for \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.046280196Z" level=info msg="Ensure that sandbox d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c in task-service has been cleanup successfully" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.047266491Z" level=info msg="StopPodSandbox for \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.049181852Z" level=info msg="Ensure that sandbox 8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35 in task-service has been cleanup successfully" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.049735335Z" level=info msg="Ensure that sandbox 9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f in task-service has been cleanup successfully" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.051090984Z" level=info msg="StopPodSandbox for \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.051608973Z" level=info msg="StopPodSandbox for \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\"" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.054350533Z" level=info msg="Ensure that sandbox 0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e in task-service has been cleanup successfully" Jan 24 00:33:12.134914 containerd[1990]: time="2026-01-24T00:33:12.061512856Z" level=info msg="Ensure that sandbox bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64 in task-service has been cleanup successfully" Jan 24 00:33:12.263396 containerd[1990]: time="2026-01-24T00:33:12.263269817Z" level=error msg="StopPodSandbox for \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\" failed" error="failed to destroy network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.268527 kubelet[3194]: E0124 00:33:12.268480 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:33:12.268817 kubelet[3194]: E0124 00:33:12.268760 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f"} Jan 24 00:33:12.272163 kubelet[3194]: E0124 00:33:12.268929 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.272163 kubelet[3194]: E0124 00:33:12.268964 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:33:12.291089 containerd[1990]: time="2026-01-24T00:33:12.291038021Z" level=error msg="StopPodSandbox for \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\" failed" error="failed to destroy network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.291661 kubelet[3194]: E0124 00:33:12.291461 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:33:12.291661 kubelet[3194]: E0124 00:33:12.291534 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d"} Jan 24 00:33:12.291661 kubelet[3194]: E0124 00:33:12.291581 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8bd1b1e2-6c2f-496d-84df-3687f4a4a992\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.291661 kubelet[3194]: E0124 00:33:12.291612 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8bd1b1e2-6c2f-496d-84df-3687f4a4a992\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:33:12.298246 containerd[1990]: time="2026-01-24T00:33:12.296319897Z" level=error msg="StopPodSandbox for 
\"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\" failed" error="failed to destroy network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.298246 containerd[1990]: time="2026-01-24T00:33:12.297347610Z" level=error msg="StopPodSandbox for \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\" failed" error="failed to destroy network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.298426 kubelet[3194]: E0124 00:33:12.297896 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:33:12.298426 kubelet[3194]: E0124 00:33:12.297979 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e"} Jan 24 00:33:12.298426 kubelet[3194]: E0124 00:33:12.298021 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d604c20-234e-4790-af46-e3ccb6ebbab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.298426 kubelet[3194]: E0124 00:33:12.298051 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d604c20-234e-4790-af46-e3ccb6ebbab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vpf5l" podUID="8d604c20-234e-4790-af46-e3ccb6ebbab2" Jan 24 00:33:12.298716 kubelet[3194]: E0124 00:33:12.298093 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:33:12.298716 kubelet[3194]: E0124 00:33:12.298116 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c"} Jan 24 00:33:12.298716 kubelet[3194]: E0124 00:33:12.298162 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dea86f8-2783-4942-8476-4f769af7b22d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.298716 kubelet[3194]: E0124 00:33:12.298187 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dea86f8-2783-4942-8476-4f769af7b22d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:33:12.304667 containerd[1990]: time="2026-01-24T00:33:12.304613114Z" level=error msg="StopPodSandbox for \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\" failed" error="failed to destroy network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.304884 kubelet[3194]: E0124 00:33:12.304843 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:33:12.304968 kubelet[3194]: E0124 00:33:12.304904 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69"} Jan 24 00:33:12.305648 kubelet[3194]: E0124 00:33:12.304954 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2953039e-0e7f-4027-9a3c-137a03fa2153\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.305648 kubelet[3194]: E0124 00:33:12.304995 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2953039e-0e7f-4027-9a3c-137a03fa2153\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:33:12.305853 containerd[1990]: time="2026-01-24T00:33:12.305810985Z" level=error msg="StopPodSandbox for \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\" failed" error="failed to destroy network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.306092 kubelet[3194]: E0124 00:33:12.306054 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:33:12.306174 kubelet[3194]: E0124 00:33:12.306104 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64"} Jan 24 00:33:12.307944 containerd[1990]: time="2026-01-24T00:33:12.307898688Z" level=error msg="StopPodSandbox for \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\" failed" error="failed to destroy network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.309243 kubelet[3194]: E0124 00:33:12.309199 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:33:12.309327 kubelet[3194]: E0124 00:33:12.309261 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35"} Jan 24 00:33:12.309327 kubelet[3194]: E0124 00:33:12.309300 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6c717e9-efbf-49cc-b817-198682317a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.309458 kubelet[3194]: E0124 00:33:12.309336 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6c717e9-efbf-49cc-b817-198682317a0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network 
for sandbox \\\"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xk4gx" podUID="e6c717e9-efbf-49cc-b817-198682317a0f" Jan 24 00:33:12.315869 containerd[1990]: time="2026-01-24T00:33:12.315822185Z" level=error msg="StopPodSandbox for \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\" failed" error="failed to destroy network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:33:12.317671 kubelet[3194]: E0124 00:33:12.317628 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:33:12.317775 kubelet[3194]: E0124 00:33:12.317687 3194 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28"} Jan 24 00:33:12.317775 kubelet[3194]: E0124 00:33:12.317726 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44356a3b-6e7e-4852-a5bd-fffe6e033ca3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.317775 kubelet[3194]: E0124 00:33:12.317760 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44356a3b-6e7e-4852-a5bd-fffe6e033ca3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:33:12.320385 kubelet[3194]: E0124 00:33:12.306176 3194 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0ff43df-473e-4ec1-a924-95f6d305484f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:33:12.320503 kubelet[3194]: E0124 00:33:12.320418 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"e0ff43df-473e-4ec1-a924-95f6d305484f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f66bf4696-t77qb" podUID="e0ff43df-473e-4ec1-a924-95f6d305484f" Jan 24 00:33:12.434079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e-shm.mount: Deactivated successfully. Jan 24 00:33:12.434902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d-shm.mount: Deactivated successfully. Jan 24 00:33:12.435001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35-shm.mount: Deactivated successfully. Jan 24 00:33:13.306369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199906368.mount: Deactivated successfully. Jan 24 00:33:13.356319 containerd[1990]: time="2026-01-24T00:33:13.356215665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:33:13.357917 containerd[1990]: time="2026-01-24T00:33:13.357866364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:13.531259 containerd[1990]: time="2026-01-24T00:33:13.531173915Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:13.542911 containerd[1990]: time="2026-01-24T00:33:13.542017742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:33:13.542911 containerd[1990]: time="2026-01-24T00:33:13.542748367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.694282396s" Jan 24 00:33:13.542911 containerd[1990]: time="2026-01-24T00:33:13.542793499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:33:13.587189 containerd[1990]: time="2026-01-24T00:33:13.587047849Z" level=info msg="CreateContainer within sandbox \"64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:33:13.656511 containerd[1990]: time="2026-01-24T00:33:13.656447486Z" level=info msg="CreateContainer within sandbox \"64d261ff465df55eea2efd24a485120b2e7168fe87ad3002d40c33e70bdd1c09\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9e76c446f2a60ec59fcc7cb3dd91f46a552d87fbdf03e4641141e50bcd9737d4\"" Jan 24 00:33:13.657380 containerd[1990]: time="2026-01-24T00:33:13.657349924Z" level=info msg="StartContainer for 
\"9e76c446f2a60ec59fcc7cb3dd91f46a552d87fbdf03e4641141e50bcd9737d4\"" Jan 24 00:33:13.764536 systemd[1]: Started cri-containerd-9e76c446f2a60ec59fcc7cb3dd91f46a552d87fbdf03e4641141e50bcd9737d4.scope - libcontainer container 9e76c446f2a60ec59fcc7cb3dd91f46a552d87fbdf03e4641141e50bcd9737d4. Jan 24 00:33:13.912622 containerd[1990]: time="2026-01-24T00:33:13.912477878Z" level=info msg="StartContainer for \"9e76c446f2a60ec59fcc7cb3dd91f46a552d87fbdf03e4641141e50bcd9737d4\" returns successfully" Jan 24 00:33:14.399522 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:33:14.401510 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 24 00:33:18.572433 kubelet[3194]: I0124 00:33:18.561305 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t45xx" podStartSLOduration=5.716769806 podStartE2EDuration="25.561267607s" podCreationTimestamp="2026-01-24 00:32:53 +0000 UTC" firstStartedPulling="2026-01-24 00:32:53.699437027 +0000 UTC m=+54.253392921" lastFinishedPulling="2026-01-24 00:33:13.543934816 +0000 UTC m=+74.097890722" observedRunningTime="2026-01-24 00:33:14.03183603 +0000 UTC m=+74.585791946" watchObservedRunningTime="2026-01-24 00:33:18.561267607 +0000 UTC m=+79.115223524" Jan 24 00:33:18.575510 containerd[1990]: time="2026-01-24T00:33:18.573418528Z" level=info msg="StopPodSandbox for \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\"" Jan 24 00:33:18.729209 kernel: bpftool[4765]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:33:19.739499 (udev-worker)[4779]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:19.823860 systemd-networkd[1622]: vxlan.calico: Link UP Jan 24 00:33:19.823869 systemd-networkd[1622]: vxlan.calico: Gained carrier Jan 24 00:33:20.025744 (udev-worker)[4789]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:20.913545 systemd[1]: Started sshd@7-172.31.16.136:22-4.153.228.146:35574.service - OpenSSH per-connection server daemon (4.153.228.146:35574). Jan 24 00:33:21.194380 systemd-networkd[1622]: vxlan.calico: Gained IPv6LL Jan 24 00:33:21.466096 sshd[4845]: Accepted publickey for core from 4.153.228.146 port 35574 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:21.469024 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:21.475779 systemd-logind[1960]: New session 8 of user core. Jan 24 00:33:21.479332 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:33:22.298011 sshd[4845]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:22.300964 systemd[1]: sshd@7-172.31.16.136:22-4.153.228.146:35574.service: Deactivated successfully. Jan 24 00:33:22.303087 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:33:22.304483 systemd-logind[1960]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:33:22.305975 systemd-logind[1960]: Removed session 8. 
Jan 24 00:33:22.604128 containerd[1990]: time="2026-01-24T00:33:22.603934770Z" level=info msg="StopPodSandbox for \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\"" Jan 24 00:33:23.612436 containerd[1990]: time="2026-01-24T00:33:23.612294289Z" level=info msg="StopPodSandbox for \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\"" Jan 24 00:33:24.046739 ntpd[1955]: Listen normally on 7 vxlan.calico 192.168.10.64:123 Jan 24 00:33:24.046814 ntpd[1955]: Listen normally on 8 vxlan.calico [fe80::64f4:30ff:fe28:88b1%4]:123 Jan 24 00:33:24.047681 ntpd[1955]: 24 Jan 00:33:24 ntpd[1955]: Listen normally on 7 vxlan.calico 192.168.10.64:123 Jan 24 00:33:24.047681 ntpd[1955]: 24 Jan 00:33:24 ntpd[1955]: Listen normally on 8 vxlan.calico [fe80::64f4:30ff:fe28:88b1%4]:123 Jan 24 00:33:24.604654 containerd[1990]: time="2026-01-24T00:33:24.604345162Z" level=info msg="StopPodSandbox for \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\"" Jan 24 00:33:24.604854 containerd[1990]: time="2026-01-24T00:33:24.604834118Z" level=info msg="StopPodSandbox for \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\"" Jan 24 00:33:26.604209 containerd[1990]: time="2026-01-24T00:33:26.603983443Z" level=info msg="StopPodSandbox for \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\"" Jan 24 00:33:26.604209 containerd[1990]: time="2026-01-24T00:33:26.604014909Z" level=info msg="StopPodSandbox for \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\"" Jan 24 00:33:26.606294 containerd[1990]: time="2026-01-24T00:33:26.603984128Z" level=info msg="StopPodSandbox for \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\"" Jan 24 00:33:27.396506 systemd[1]: Started sshd@8-172.31.16.136:22-4.153.228.146:55738.service - OpenSSH per-connection server daemon (4.153.228.146:55738). Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:26.701 [INFO][4984] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:26.702 [INFO][4984] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" iface="eth0" netns="/var/run/netns/cni-33f7c09c-3004-9f74-28c3-03cdde8c4146" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:26.702 [INFO][4984] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" iface="eth0" netns="/var/run/netns/cni-33f7c09c-3004-9f74-28c3-03cdde8c4146" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:26.703 [INFO][4984] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" iface="eth0" netns="/var/run/netns/cni-33f7c09c-3004-9f74-28c3-03cdde8c4146" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:26.703 [INFO][4984] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:26.703 [INFO][4984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:27.818 [INFO][5011] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:27.820 [INFO][5011] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:27.820 [INFO][5011] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:27.831 [WARNING][5011] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:27.831 [INFO][5011] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:27.833 [INFO][5011] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:27.840091 containerd[1990]: 2026-01-24 00:33:27.836 [INFO][4984] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:33:27.841276 containerd[1990]: time="2026-01-24T00:33:27.841084805Z" level=info msg="TearDown network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\" successfully" Jan 24 00:33:27.841276 containerd[1990]: time="2026-01-24T00:33:27.841122285Z" level=info msg="StopPodSandbox for \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\" returns successfully" Jan 24 00:33:27.846475 systemd[1]: run-netns-cni\x2d33f7c09c\x2d3004\x2d9f74\x2d28c3\x2d03cdde8c4146.mount: Deactivated successfully. Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:24.666 [INFO][4921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:24.666 [INFO][4921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" iface="eth0" netns="/var/run/netns/cni-fd3aa853-3441-daee-81da-b770bd060691" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:24.667 [INFO][4921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" iface="eth0" netns="/var/run/netns/cni-fd3aa853-3441-daee-81da-b770bd060691" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:24.672 [INFO][4921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" iface="eth0" netns="/var/run/netns/cni-fd3aa853-3441-daee-81da-b770bd060691" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:24.672 [INFO][4921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:24.672 [INFO][4921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:27.818 [INFO][4939] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:27.820 [INFO][4939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:27.833 [INFO][4939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:27.845 [WARNING][4939] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:27.846 [INFO][4939] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:27.849 [INFO][4939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:27.861374 containerd[1990]: 2026-01-24 00:33:27.856 [INFO][4921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:33:27.866171 containerd[1990]: time="2026-01-24T00:33:27.862655875Z" level=info msg="TearDown network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\" successfully" Jan 24 00:33:27.866171 containerd[1990]: time="2026-01-24T00:33:27.862692318Z" level=info msg="StopPodSandbox for \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\" returns successfully" Jan 24 00:33:27.866171 containerd[1990]: time="2026-01-24T00:33:27.864359806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpf5l,Uid:8d604c20-234e-4790-af46-e3ccb6ebbab2,Namespace:kube-system,Attempt:1,}" Jan 24 00:33:27.868529 systemd[1]: run-netns-cni\x2dfd3aa853\x2d3441\x2ddaee\x2d81da\x2db770bd060691.mount: Deactivated successfully. 
Jan 24 00:33:27.871008 containerd[1990]: time="2026-01-24T00:33:27.870962050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777f8fb74-d8qgp,Uid:0c7963cb-5f76-453d-b9ca-f28ed3f17ce0,Namespace:calico-system,Attempt:1,}" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:22.751 [INFO][4871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:22.751 [INFO][4871] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" iface="eth0" netns="/var/run/netns/cni-1245516f-ae4a-7d14-c0a9-f66092ba8861" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:22.752 [INFO][4871] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" iface="eth0" netns="/var/run/netns/cni-1245516f-ae4a-7d14-c0a9-f66092ba8861" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:22.752 [INFO][4871] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" iface="eth0" netns="/var/run/netns/cni-1245516f-ae4a-7d14-c0a9-f66092ba8861" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:22.752 [INFO][4871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:22.752 [INFO][4871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:27.818 [INFO][4878] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:27.821 [INFO][4878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:27.851 [INFO][4878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:27.871 [WARNING][4878] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:27.871 [INFO][4878] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:27.874 [INFO][4878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:27.887097 containerd[1990]: 2026-01-24 00:33:27.882 [INFO][4871] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:33:27.904592 containerd[1990]: time="2026-01-24T00:33:27.904129494Z" level=info msg="TearDown network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\" successfully" Jan 24 00:33:27.904592 containerd[1990]: time="2026-01-24T00:33:27.904208398Z" level=info msg="StopPodSandbox for \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\" returns successfully" Jan 24 00:33:27.917803 containerd[1990]: time="2026-01-24T00:33:27.917721203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-7njzz,Uid:8bd1b1e2-6c2f-496d-84df-3687f4a4a992,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:33:27.921133 systemd[1]: run-netns-cni\x2d1245516f\x2dae4a\x2d7d14\x2dc0a9\x2df66092ba8861.mount: Deactivated successfully. Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:26.678 [INFO][4983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:26.679 [INFO][4983] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" iface="eth0" netns="/var/run/netns/cni-e3eb8e0e-ade0-fb99-0a8a-e19139a9c9d7" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:26.680 [INFO][4983] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" iface="eth0" netns="/var/run/netns/cni-e3eb8e0e-ade0-fb99-0a8a-e19139a9c9d7" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:26.683 [INFO][4983] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" iface="eth0" netns="/var/run/netns/cni-e3eb8e0e-ade0-fb99-0a8a-e19139a9c9d7" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:26.683 [INFO][4983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:26.683 [INFO][4983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:27.819 [INFO][5001] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:27.820 [INFO][5001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:27.874 [INFO][5001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:27.888 [WARNING][5001] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:27.888 [INFO][5001] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:27.897 [INFO][5001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:27.935049 containerd[1990]: 2026-01-24 00:33:27.906 [INFO][4983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:33:27.946477 containerd[1990]: time="2026-01-24T00:33:27.938225394Z" level=info msg="TearDown network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\" successfully" Jan 24 00:33:27.946477 containerd[1990]: time="2026-01-24T00:33:27.945526516Z" level=info msg="StopPodSandbox for \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\" returns successfully" Jan 24 00:33:27.961906 systemd[1]: run-netns-cni\x2de3eb8e0e\x2dade0\x2dfb99\x2d0a8a\x2de19139a9c9d7.mount: Deactivated successfully. Jan 24 00:33:27.973689 sshd[5025]: Accepted publickey for core from 4.153.228.146 port 55738 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:27.979693 containerd[1990]: time="2026-01-24T00:33:27.979654783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-49mgw,Uid:44356a3b-6e7e-4852-a5bd-fffe6e033ca3,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:33:27.983455 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:27.999433 systemd-logind[1960]: New session 9 of user core. Jan 24 00:33:28.003659 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:26.695 [INFO][4976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:26.695 [INFO][4976] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" iface="eth0" netns="/var/run/netns/cni-97ba9d28-3eed-d42b-2cf7-84fe03577cf3" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:26.696 [INFO][4976] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" iface="eth0" netns="/var/run/netns/cni-97ba9d28-3eed-d42b-2cf7-84fe03577cf3" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:26.697 [INFO][4976] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" iface="eth0" netns="/var/run/netns/cni-97ba9d28-3eed-d42b-2cf7-84fe03577cf3" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:26.698 [INFO][4976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:26.698 [INFO][4976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:27.818 [INFO][5006] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:27.820 [INFO][5006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:27.895 [INFO][5006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:27.927 [WARNING][5006] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:27.927 [INFO][5006] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:27.945 [INFO][5006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:28.010806 containerd[1990]: 2026-01-24 00:33:27.974 [INFO][4976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:33:28.012517 containerd[1990]: time="2026-01-24T00:33:28.011446131Z" level=info msg="TearDown network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\" successfully" Jan 24 00:33:28.012517 containerd[1990]: time="2026-01-24T00:33:28.011480461Z" level=info msg="StopPodSandbox for \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\" returns successfully" Jan 24 00:33:28.015241 containerd[1990]: time="2026-01-24T00:33:28.014827506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t69bh,Uid:2953039e-0e7f-4027-9a3c-137a03fa2153,Namespace:calico-system,Attempt:1,}" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:19.593 [INFO][4757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:19.593 [INFO][4757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" iface="eth0" netns="/var/run/netns/cni-1a153312-f898-b438-d35b-aef494e27e72" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:19.594 [INFO][4757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" iface="eth0" netns="/var/run/netns/cni-1a153312-f898-b438-d35b-aef494e27e72" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:19.595 [INFO][4757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" iface="eth0" netns="/var/run/netns/cni-1a153312-f898-b438-d35b-aef494e27e72" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:19.595 [INFO][4757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:19.595 [INFO][4757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:27.817 [INFO][4786] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:27.820 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:27.947 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:27.988 [WARNING][4786] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:27.988 [INFO][4786] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:27.992 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:28.031120 containerd[1990]: 2026-01-24 00:33:28.019 [INFO][4757] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:33:28.032727 containerd[1990]: time="2026-01-24T00:33:28.031648924Z" level=info msg="TearDown network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\" successfully" Jan 24 00:33:28.032727 containerd[1990]: time="2026-01-24T00:33:28.031694812Z" level=info msg="StopPodSandbox for \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\" returns successfully" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:24.664 [INFO][4922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:24.665 [INFO][4922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" iface="eth0" netns="/var/run/netns/cni-fb8a3b74-b710-5c57-3bfd-82244ba636c2" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:24.666 [INFO][4922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" iface="eth0" netns="/var/run/netns/cni-fb8a3b74-b710-5c57-3bfd-82244ba636c2" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:24.667 [INFO][4922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" iface="eth0" netns="/var/run/netns/cni-fb8a3b74-b710-5c57-3bfd-82244ba636c2" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:24.667 [INFO][4922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:24.667 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:27.817 [INFO][4934] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:27.821 [INFO][4934] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.031 [INFO][4934] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.077 [WARNING][4934] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.079 [INFO][4934] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.087 [INFO][4934] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.098 [INFO][4922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:33:28.115415 containerd[1990]: time="2026-01-24T00:33:28.115073130Z" level=info msg="TearDown network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\" successfully" Jan 24 00:33:28.115415 containerd[1990]: time="2026-01-24T00:33:28.115103170Z" level=info msg="StopPodSandbox for \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\" returns successfully" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:23.666 [INFO][4892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:23.667 [INFO][4892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" iface="eth0" netns="/var/run/netns/cni-139db554-44b5-ec50-0bc7-7d5c0cea8c5c" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:23.668 [INFO][4892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" iface="eth0" netns="/var/run/netns/cni-139db554-44b5-ec50-0bc7-7d5c0cea8c5c" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:23.668 [INFO][4892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" iface="eth0" netns="/var/run/netns/cni-139db554-44b5-ec50-0bc7-7d5c0cea8c5c" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:23.668 [INFO][4892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:23.668 [INFO][4892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:27.818 [INFO][4899] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:27.820 [INFO][4899] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:27.998 [INFO][4899] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.026 [WARNING][4899] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.026 [INFO][4899] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.031 [INFO][4899] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:33:28.115415 containerd[1990]: 2026-01-24 00:33:28.074 [INFO][4892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:33:28.117565 containerd[1990]: time="2026-01-24T00:33:28.115614101Z" level=info msg="TearDown network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\" successfully" Jan 24 00:33:28.117565 containerd[1990]: time="2026-01-24T00:33:28.115637685Z" level=info msg="StopPodSandbox for \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\" returns successfully" Jan 24 00:33:28.123481 containerd[1990]: time="2026-01-24T00:33:28.123366735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xk4gx,Uid:e6c717e9-efbf-49cc-b817-198682317a0f,Namespace:kube-system,Attempt:1,}" Jan 24 00:33:28.131306 containerd[1990]: time="2026-01-24T00:33:28.130956337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgckl,Uid:6dea86f8-2783-4942-8476-4f769af7b22d,Namespace:calico-system,Attempt:1,}" Jan 24 00:33:28.172166 kubelet[3194]: I0124 00:33:28.171900 3194 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-ca-bundle\") pod \"e0ff43df-473e-4ec1-a924-95f6d305484f\" (UID: \"e0ff43df-473e-4ec1-a924-95f6d305484f\") " Jan 24 00:33:28.172166 kubelet[3194]: I0124 00:33:28.172009 3194 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4x9v\" (UniqueName: \"kubernetes.io/projected/e0ff43df-473e-4ec1-a924-95f6d305484f-kube-api-access-r4x9v\") pod \"e0ff43df-473e-4ec1-a924-95f6d305484f\" (UID: \"e0ff43df-473e-4ec1-a924-95f6d305484f\") " Jan 24 00:33:28.172166 kubelet[3194]: I0124 00:33:28.172072 3194 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-backend-key-pair\") pod \"e0ff43df-473e-4ec1-a924-95f6d305484f\" (UID: \"e0ff43df-473e-4ec1-a924-95f6d305484f\") " Jan 24 00:33:28.207467 kubelet[3194]: I0124 00:33:28.207192 3194 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ff43df-473e-4ec1-a924-95f6d305484f-kube-api-access-r4x9v" (OuterVolumeSpecName: "kube-api-access-r4x9v") pod "e0ff43df-473e-4ec1-a924-95f6d305484f" (UID: 
"e0ff43df-473e-4ec1-a924-95f6d305484f"). InnerVolumeSpecName "kube-api-access-r4x9v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:33:28.214917 kubelet[3194]: I0124 00:33:28.201097 3194 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e0ff43df-473e-4ec1-a924-95f6d305484f" (UID: "e0ff43df-473e-4ec1-a924-95f6d305484f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:33:28.216202 kubelet[3194]: I0124 00:33:28.216159 3194 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e0ff43df-473e-4ec1-a924-95f6d305484f" (UID: "e0ff43df-473e-4ec1-a924-95f6d305484f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:33:28.274323 kubelet[3194]: I0124 00:33:28.273039 3194 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-backend-key-pair\") on node \"ip-172-31-16-136\" DevicePath \"\"" Jan 24 00:33:28.274323 kubelet[3194]: I0124 00:33:28.273079 3194 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ff43df-473e-4ec1-a924-95f6d305484f-whisker-ca-bundle\") on node \"ip-172-31-16-136\" DevicePath \"\"" Jan 24 00:33:28.274323 kubelet[3194]: I0124 00:33:28.273096 3194 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r4x9v\" (UniqueName: \"kubernetes.io/projected/e0ff43df-473e-4ec1-a924-95f6d305484f-kube-api-access-r4x9v\") on node \"ip-172-31-16-136\" DevicePath \"\"" Jan 24 00:33:28.735363 sshd[5025]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:28.740340 systemd[1]: sshd@8-172.31.16.136:22-4.153.228.146:55738.service: Deactivated successfully. Jan 24 00:33:28.743797 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:33:28.744983 systemd-logind[1960]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:33:28.748114 systemd-logind[1960]: Removed session 9. Jan 24 00:33:28.753790 systemd-networkd[1622]: calibbb1761c22f: Link UP Jan 24 00:33:28.755847 systemd-networkd[1622]: calibbb1761c22f: Gained carrier Jan 24 00:33:28.756733 (udev-worker)[5183]: Network interface NamePolicy= disabled on kernel command line. 
Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.214 [INFO][5056] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0 calico-apiserver-59955b8999- calico-apiserver 8bd1b1e2-6c2f-496d-84df-3687f4a4a992 991 0 2026-01-24 00:32:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59955b8999 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-136 calico-apiserver-59955b8999-7njzz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbb1761c22f [] [] }} ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.216 [INFO][5056] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.438 [INFO][5116] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" HandleID="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.438 [INFO][5116] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" HandleID="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-136", "pod":"calico-apiserver-59955b8999-7njzz", "timestamp":"2026-01-24 00:33:28.438274863 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.439 [INFO][5116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.440 [INFO][5116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.440 [INFO][5116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.481 [INFO][5116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.715 [INFO][5116] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.719 [INFO][5116] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.721 [INFO][5116] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.724 [INFO][5116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.724 [INFO][5116] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.725 [INFO][5116] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.732 [INFO][5116] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.738 [INFO][5116] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.65/26] block=192.168.10.64/26 handle="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.739 [INFO][5116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.65/26] handle="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" host="ip-172-31-16-136" Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.739 [INFO][5116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:28.784567 containerd[1990]: 2026-01-24 00:33:28.739 [INFO][5116] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.65/26] IPv6=[] ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" HandleID="k8s-pod-network.e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:28.785764 containerd[1990]: 2026-01-24 00:33:28.747 [INFO][5056] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bd1b1e2-6c2f-496d-84df-3687f4a4a992", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"calico-apiserver-59955b8999-7njzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb1761c22f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:28.785764 containerd[1990]: 2026-01-24 00:33:28.747 [INFO][5056] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.65/32] ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:28.785764 containerd[1990]: 2026-01-24 00:33:28.748 [INFO][5056] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbb1761c22f ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:28.785764 containerd[1990]: 2026-01-24 00:33:28.757 [INFO][5056] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:28.785764 containerd[1990]: 2026-01-24 00:33:28.758 [INFO][5056] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bd1b1e2-6c2f-496d-84df-3687f4a4a992", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b", Pod:"calico-apiserver-59955b8999-7njzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb1761c22f", MAC:"da:fa:87:15:58:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:28.785764 containerd[1990]: 2026-01-24 00:33:28.782 [INFO][5056] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-7njzz" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:33:28.857876 systemd-networkd[1622]: calif2ff9cc824b: Link UP Jan 24 00:33:28.860049 (udev-worker)[5185]: Network interface NamePolicy= disabled on kernel command line. Jan 24 00:33:28.864323 systemd-networkd[1622]: calif2ff9cc824b: Gained carrier Jan 24 00:33:28.866703 systemd[1]: run-netns-cni\x2d97ba9d28\x2d3eed\x2dd42b\x2d2cf7\x2d84fe03577cf3.mount: Deactivated successfully. Jan 24 00:33:28.866812 systemd[1]: run-netns-cni\x2d1a153312\x2df898\x2db438\x2dd35b\x2daef494e27e72.mount: Deactivated successfully. Jan 24 00:33:28.866866 systemd[1]: run-netns-cni\x2dfb8a3b74\x2db710\x2d5c57\x2d3bfd\x2d82244ba636c2.mount: Deactivated successfully. Jan 24 00:33:28.866922 systemd[1]: var-lib-kubelet-pods-e0ff43df\x2d473e\x2d4ec1\x2da924\x2d95f6d305484f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr4x9v.mount: Deactivated successfully. Jan 24 00:33:28.866988 systemd[1]: var-lib-kubelet-pods-e0ff43df\x2d473e\x2d4ec1\x2da924\x2d95f6d305484f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:33:28.867045 systemd[1]: run-netns-cni\x2d139db554\x2d44b5\x2dec50\x2d0bc7\x2d7d5c0cea8c5c.mount: Deactivated successfully. 
Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.195 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0 coredns-674b8bbfcf- kube-system 8d604c20-234e-4790-af46-e3ccb6ebbab2 1022 0 2026-01-24 00:32:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-136 coredns-674b8bbfcf-vpf5l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2ff9cc824b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.197 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.558 [INFO][5094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" HandleID="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.562 [INFO][5094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" HandleID="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f740), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-136", "pod":"coredns-674b8bbfcf-vpf5l", "timestamp":"2026-01-24 00:33:28.558470182 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.567 [INFO][5094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.739 [INFO][5094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.740 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.755 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.815 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.822 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.825 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.827 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.827 [INFO][5094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.829 [INFO][5094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.834 [INFO][5094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.840 [INFO][5094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.66/26] block=192.168.10.64/26 handle="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.841 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.66/26] handle="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" host="ip-172-31-16-136" Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.843 [INFO][5094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:28.896738 containerd[1990]: 2026-01-24 00:33:28.843 [INFO][5094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.66/26] IPv6=[] ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" HandleID="k8s-pod-network.5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:28.898890 containerd[1990]: 2026-01-24 00:33:28.852 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8d604c20-234e-4790-af46-e3ccb6ebbab2", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"coredns-674b8bbfcf-vpf5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2ff9cc824b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:28.898890 containerd[1990]: 2026-01-24 00:33:28.852 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.66/32] ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:28.898890 containerd[1990]: 2026-01-24 00:33:28.852 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2ff9cc824b ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:28.898890 containerd[1990]: 2026-01-24 00:33:28.856 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" 
WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:28.898890 containerd[1990]: 2026-01-24 00:33:28.856 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8d604c20-234e-4790-af46-e3ccb6ebbab2", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da", Pod:"coredns-674b8bbfcf-vpf5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2ff9cc824b", MAC:"86:de:ff:63:c3:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:28.898890 containerd[1990]: 2026-01-24 00:33:28.885 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpf5l" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:33:28.973205 systemd-networkd[1622]: calib7619293cb5: Link UP Jan 24 00:33:28.981175 systemd-networkd[1622]: calib7619293cb5: Gained carrier Jan 24 00:33:29.014234 containerd[1990]: time="2026-01-24T00:33:29.010485770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:29.014234 containerd[1990]: time="2026-01-24T00:33:29.012621239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:29.014234 containerd[1990]: time="2026-01-24T00:33:29.012635078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.014234 containerd[1990]: time="2026-01-24T00:33:29.012724449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.212 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0 calico-kube-controllers-777f8fb74- calico-system 0c7963cb-5f76-453d-b9ca-f28ed3f17ce0 1003 0 2026-01-24 00:32:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:777f8fb74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-136 calico-kube-controllers-777f8fb74-d8qgp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib7619293cb5 [] [] }} ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.212 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.591 [INFO][5125] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" HandleID="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.604 [INFO][5125] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" HandleID="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001231c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-136", "pod":"calico-kube-controllers-777f8fb74-d8qgp", "timestamp":"2026-01-24 00:33:28.591519072 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.604 [INFO][5125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.841 [INFO][5125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.841 [INFO][5125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.886 [INFO][5125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.917 [INFO][5125] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.922 [INFO][5125] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.925 [INFO][5125] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.928 [INFO][5125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.928 [INFO][5125] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.934 [INFO][5125] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.939 [INFO][5125] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.950 [INFO][5125] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.67/26] block=192.168.10.64/26 handle="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.951 [INFO][5125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.67/26] handle="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" host="ip-172-31-16-136" Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.951 [INFO][5125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:29.022408 containerd[1990]: 2026-01-24 00:33:28.951 [INFO][5125] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.67/26] IPv6=[] ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" HandleID="k8s-pod-network.6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:29.024330 containerd[1990]: 2026-01-24 00:33:28.960 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0", GenerateName:"calico-kube-controllers-777f8fb74-", Namespace:"calico-system", SelfLink:"", UID:"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777f8fb74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"calico-kube-controllers-777f8fb74-d8qgp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7619293cb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.024330 containerd[1990]: 2026-01-24 00:33:28.962 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.67/32] ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:29.024330 containerd[1990]: 2026-01-24 00:33:28.964 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7619293cb5 ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:29.024330 containerd[1990]: 2026-01-24 00:33:28.979 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:29.024330 containerd[1990]: 2026-01-24 
00:33:28.982 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0", GenerateName:"calico-kube-controllers-777f8fb74-", Namespace:"calico-system", SelfLink:"", UID:"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777f8fb74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c", Pod:"calico-kube-controllers-777f8fb74-d8qgp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7619293cb5", MAC:"da:fb:2f:2f:af:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.024330 containerd[1990]: 2026-01-24 00:33:29.010 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c" Namespace="calico-system" Pod="calico-kube-controllers-777f8fb74-d8qgp" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:33:29.108658 systemd[1]: Removed slice kubepods-besteffort-pode0ff43df_473e_4ec1_a924_95f6d305484f.slice - libcontainer container kubepods-besteffort-pode0ff43df_473e_4ec1_a924_95f6d305484f.slice. Jan 24 00:33:29.133361 containerd[1990]: time="2026-01-24T00:33:29.131126363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:29.133361 containerd[1990]: time="2026-01-24T00:33:29.132071019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:29.133361 containerd[1990]: time="2026-01-24T00:33:29.132097057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.133361 containerd[1990]: time="2026-01-24T00:33:29.133073410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.156741 systemd-networkd[1622]: cali26dd29f8fc9: Link UP Jan 24 00:33:29.159101 systemd-networkd[1622]: cali26dd29f8fc9: Gained carrier Jan 24 00:33:29.242382 systemd[1]: Started cri-containerd-e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b.scope - libcontainer container e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b. Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.326 [INFO][5098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0 coredns-674b8bbfcf- kube-system e6c717e9-efbf-49cc-b817-198682317a0f 1002 0 2026-01-24 00:32:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-136 coredns-674b8bbfcf-xk4gx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26dd29f8fc9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.333 [INFO][5098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.611 [INFO][5137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" HandleID="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.611 [INFO][5137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" HandleID="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cbc50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-136", "pod":"coredns-674b8bbfcf-xk4gx", "timestamp":"2026-01-24 00:33:28.611022212 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.611 [INFO][5137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.955 [INFO][5137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.955 [INFO][5137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:28.981 [INFO][5137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.018 [INFO][5137] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.049 [INFO][5137] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.060 [INFO][5137] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.069 [INFO][5137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.069 [INFO][5137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.076 [INFO][5137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629 Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.097 [INFO][5137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.120 [INFO][5137] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.68/26] block=192.168.10.64/26 handle="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.122 [INFO][5137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.68/26] handle="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" host="ip-172-31-16-136" Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.122 [INFO][5137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:29.255982 containerd[1990]: 2026-01-24 00:33:29.122 [INFO][5137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.68/26] IPv6=[] ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" HandleID="k8s-pod-network.9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:29.257708 containerd[1990]: 2026-01-24 00:33:29.139 [INFO][5098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e6c717e9-efbf-49cc-b817-198682317a0f", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"coredns-674b8bbfcf-xk4gx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dd29f8fc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.257708 containerd[1990]: 2026-01-24 00:33:29.139 [INFO][5098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.68/32] ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:29.257708 containerd[1990]: 2026-01-24 00:33:29.140 [INFO][5098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26dd29f8fc9 ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:29.257708 containerd[1990]: 2026-01-24 00:33:29.159 [INFO][5098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" 
WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:29.257708 containerd[1990]: 2026-01-24 00:33:29.174 [INFO][5098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e6c717e9-efbf-49cc-b817-198682317a0f", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629", Pod:"coredns-674b8bbfcf-xk4gx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dd29f8fc9", MAC:"82:62:6f:51:5f:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.257708 containerd[1990]: 2026-01-24 00:33:29.235 [INFO][5098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629" Namespace="kube-system" Pod="coredns-674b8bbfcf-xk4gx" WorkloadEndpoint="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:33:29.268761 systemd[1]: Started cri-containerd-5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da.scope - libcontainer container 5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da. Jan 24 00:33:29.316514 containerd[1990]: time="2026-01-24T00:33:29.315033329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:29.316514 containerd[1990]: time="2026-01-24T00:33:29.315104715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:29.316514 containerd[1990]: time="2026-01-24T00:33:29.315161219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.316514 containerd[1990]: time="2026-01-24T00:33:29.315268035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.375948 systemd-networkd[1622]: cali588e7564342: Link UP Jan 24 00:33:29.376846 systemd-networkd[1622]: cali588e7564342: Gained carrier Jan 24 00:33:29.437430 containerd[1990]: time="2026-01-24T00:33:29.437005008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:29.437846 systemd[1]: Started cri-containerd-6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c.scope - libcontainer container 6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c. Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:28.428 [INFO][5069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0 calico-apiserver-59955b8999- calico-apiserver 44356a3b-6e7e-4852-a5bd-fffe6e033ca3 1020 0 2026-01-24 00:32:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59955b8999 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-136 calico-apiserver-59955b8999-49mgw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali588e7564342 [] [] }} ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:28.428 [INFO][5069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:28.663 [INFO][5151] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" HandleID="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:28.664 [INFO][5151] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" HandleID="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001238c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-136", "pod":"calico-apiserver-59955b8999-49mgw", "timestamp":"2026-01-24 00:33:28.663250364 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 
00:33:28.664 [INFO][5151] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.126 [INFO][5151] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.126 [INFO][5151] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.175 [INFO][5151] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.240 [INFO][5151] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.283 [INFO][5151] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.287 [INFO][5151] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.290 [INFO][5151] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.291 [INFO][5151] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.294 [INFO][5151] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.304 [INFO][5151] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.316 [INFO][5151] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.69/26] block=192.168.10.64/26 handle="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.316 [INFO][5151] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.69/26] handle="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" host="ip-172-31-16-136" Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.316 [INFO][5151] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:29.455982 containerd[1990]: 2026-01-24 00:33:29.316 [INFO][5151] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.69/26] IPv6=[] ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" HandleID="k8s-pod-network.10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:29.457170 containerd[1990]: 2026-01-24 00:33:29.337 [INFO][5069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"44356a3b-6e7e-4852-a5bd-fffe6e033ca3", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"calico-apiserver-59955b8999-49mgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588e7564342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.457170 containerd[1990]: 2026-01-24 00:33:29.337 [INFO][5069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.69/32] ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:29.457170 containerd[1990]: 2026-01-24 00:33:29.337 [INFO][5069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali588e7564342 ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:29.457170 containerd[1990]: 2026-01-24 00:33:29.382 [INFO][5069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:29.457170 containerd[1990]: 2026-01-24 00:33:29.399 [INFO][5069] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"44356a3b-6e7e-4852-a5bd-fffe6e033ca3", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e", Pod:"calico-apiserver-59955b8999-49mgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588e7564342", MAC:"2a:ac:e7:d7:94:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.457170 containerd[1990]: 2026-01-24 00:33:29.449 [INFO][5069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e" Namespace="calico-apiserver" Pod="calico-apiserver-59955b8999-49mgw" WorkloadEndpoint="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:33:29.466384 containerd[1990]: time="2026-01-24T00:33:29.446769498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:29.466384 containerd[1990]: time="2026-01-24T00:33:29.463194975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.466384 containerd[1990]: time="2026-01-24T00:33:29.463447619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.552694 systemd[1]: Started cri-containerd-9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629.scope - libcontainer container 9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629. 
Jan 24 00:33:29.597220 systemd-networkd[1622]: cali49598f0f84a: Link UP Jan 24 00:33:29.602300 systemd-networkd[1622]: cali49598f0f84a: Gained carrier Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:28.524 [INFO][5104] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0 csi-node-driver- calico-system 6dea86f8-2783-4942-8476-4f769af7b22d 997 0 2026-01-24 00:32:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-136 csi-node-driver-zgckl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali49598f0f84a [] [] }} ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:28.524 [INFO][5104] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:28.669 [INFO][5163] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" HandleID="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:28.670 [INFO][5163] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" HandleID="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-136", "pod":"csi-node-driver-zgckl", "timestamp":"2026-01-24 00:33:28.669971909 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:28.670 [INFO][5163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.316 [INFO][5163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.316 [INFO][5163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.364 [INFO][5163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.392 [INFO][5163] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.444 [INFO][5163] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.471 [INFO][5163] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.498 [INFO][5163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.502 [INFO][5163] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.512 [INFO][5163] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7 Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.544 [INFO][5163] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.580 [INFO][5163] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.70/26] block=192.168.10.64/26 handle="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.580 [INFO][5163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.70/26] handle="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" host="ip-172-31-16-136" Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.580 [INFO][5163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:29.627535 containerd[1990]: 2026-01-24 00:33:29.580 [INFO][5163] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.70/26] IPv6=[] ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" HandleID="k8s-pod-network.1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:29.631046 containerd[1990]: 2026-01-24 00:33:29.587 [INFO][5104] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea86f8-2783-4942-8476-4f769af7b22d", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"csi-node-driver-zgckl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49598f0f84a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.631046 containerd[1990]: 2026-01-24 00:33:29.587 [INFO][5104] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.70/32] ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:29.631046 containerd[1990]: 2026-01-24 00:33:29.587 [INFO][5104] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49598f0f84a ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:29.631046 containerd[1990]: 2026-01-24 00:33:29.600 [INFO][5104] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:29.631046 containerd[1990]: 2026-01-24 00:33:29.601 [INFO][5104] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" 
Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea86f8-2783-4942-8476-4f769af7b22d", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7", Pod:"csi-node-driver-zgckl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49598f0f84a", MAC:"36:c5:86:4e:03:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:29.631046 containerd[1990]: 2026-01-24 00:33:29.621 [INFO][5104] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7" Namespace="calico-system" Pod="csi-node-driver-zgckl" WorkloadEndpoint="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:33:29.696985 containerd[1990]: time="2026-01-24T00:33:29.688002822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:29.696985 containerd[1990]: time="2026-01-24T00:33:29.688094742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:29.696985 containerd[1990]: time="2026-01-24T00:33:29.688114258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.696985 containerd[1990]: time="2026-01-24T00:33:29.688305512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.771736 kubelet[3194]: I0124 00:33:29.769960 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d98daf60-e1b2-4bcf-bf77-7fe1f3510929-whisker-ca-bundle\") pod \"whisker-74b94c78c8-p5fzt\" (UID: \"d98daf60-e1b2-4bcf-bf77-7fe1f3510929\") " pod="calico-system/whisker-74b94c78c8-p5fzt" Jan 24 00:33:29.771736 kubelet[3194]: I0124 00:33:29.770049 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q86vj\" (UniqueName: \"kubernetes.io/projected/d98daf60-e1b2-4bcf-bf77-7fe1f3510929-kube-api-access-q86vj\") pod \"whisker-74b94c78c8-p5fzt\" (UID: \"d98daf60-e1b2-4bcf-bf77-7fe1f3510929\") " pod="calico-system/whisker-74b94c78c8-p5fzt" Jan 24 00:33:29.771736 kubelet[3194]: I0124 00:33:29.770102 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d98daf60-e1b2-4bcf-bf77-7fe1f3510929-whisker-backend-key-pair\") pod \"whisker-74b94c78c8-p5fzt\" (UID: \"d98daf60-e1b2-4bcf-bf77-7fe1f3510929\") " pod="calico-system/whisker-74b94c78c8-p5fzt" Jan 24 00:33:29.782348 systemd[1]: Created slice kubepods-besteffort-podd98daf60_e1b2_4bcf_bf77_7fe1f3510929.slice - libcontainer container kubepods-besteffort-podd98daf60_e1b2_4bcf_bf77_7fe1f3510929.slice. Jan 24 00:33:29.799165 containerd[1990]: time="2026-01-24T00:33:29.798778329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpf5l,Uid:8d604c20-234e-4790-af46-e3ccb6ebbab2,Namespace:kube-system,Attempt:1,} returns sandbox id \"5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da\"" Jan 24 00:33:29.866015 systemd[1]: run-containerd-runc-k8s.io-5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da-runc.diJkuD.mount: Deactivated successfully. Jan 24 00:33:29.913485 systemd-networkd[1622]: cali620e5cd1b75: Link UP Jan 24 00:33:29.929042 systemd-networkd[1622]: cali620e5cd1b75: Gained carrier Jan 24 00:33:29.973776 containerd[1990]: time="2026-01-24T00:33:29.966888275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:29.973776 containerd[1990]: time="2026-01-24T00:33:29.966968049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:29.973776 containerd[1990]: time="2026-01-24T00:33:29.966992246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.973776 containerd[1990]: time="2026-01-24T00:33:29.967097832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:29.981000 systemd[1]: Started cri-containerd-10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e.scope - libcontainer container 10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e. 
Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:28.477 [INFO][5077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0 goldmane-666569f655- calico-system 2953039e-0e7f-4027-9a3c-137a03fa2153 1021 0 2026-01-24 00:32:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-136 goldmane-666569f655-t69bh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali620e5cd1b75 [] [] }} ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:28.479 [INFO][5077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:28.669 [INFO][5160] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" HandleID="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:28.670 [INFO][5160] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" HandleID="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-136", "pod":"goldmane-666569f655-t69bh", "timestamp":"2026-01-24 00:33:28.669714509 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:28.670 [INFO][5160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.580 [INFO][5160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.580 [INFO][5160] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.622 [INFO][5160] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.665 [INFO][5160] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.701 [INFO][5160] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.716 [INFO][5160] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.727 [INFO][5160] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.727 [INFO][5160] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.756 [INFO][5160] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9 Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.771 [INFO][5160] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.819 [INFO][5160] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.71/26] block=192.168.10.64/26 handle="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.819 [INFO][5160] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.71/26] handle="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" host="ip-172-31-16-136" Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.819 [INFO][5160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:30.030814 containerd[1990]: 2026-01-24 00:33:29.819 [INFO][5160] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.71/26] IPv6=[] ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" HandleID="k8s-pod-network.15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:30.032299 containerd[1990]: 2026-01-24 00:33:29.879 [INFO][5077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2953039e-0e7f-4027-9a3c-137a03fa2153", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"goldmane-666569f655-t69bh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali620e5cd1b75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:30.032299 containerd[1990]: 2026-01-24 00:33:29.879 [INFO][5077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.71/32] ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:30.032299 containerd[1990]: 2026-01-24 00:33:29.880 [INFO][5077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali620e5cd1b75 ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:30.032299 containerd[1990]: 2026-01-24 00:33:29.934 [INFO][5077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:30.032299 containerd[1990]: 2026-01-24 00:33:29.956 [INFO][5077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" 
WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2953039e-0e7f-4027-9a3c-137a03fa2153", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9", Pod:"goldmane-666569f655-t69bh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali620e5cd1b75", MAC:"c6:3e:3c:6e:eb:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:30.032299 containerd[1990]: 2026-01-24 00:33:29.999 [INFO][5077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9" Namespace="calico-system" Pod="goldmane-666569f655-t69bh" WorkloadEndpoint="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:33:30.046281 containerd[1990]: time="2026-01-24T00:33:30.044793688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-7njzz,Uid:8bd1b1e2-6c2f-496d-84df-3687f4a4a992,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b\"" Jan 24 00:33:30.047484 containerd[1990]: time="2026-01-24T00:33:30.047123802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xk4gx,Uid:e6c717e9-efbf-49cc-b817-198682317a0f,Namespace:kube-system,Attempt:1,} returns sandbox id \"9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629\"" Jan 24 00:33:30.056069 containerd[1990]: time="2026-01-24T00:33:30.055922076Z" level=info msg="CreateContainer within sandbox \"5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:33:30.077195 containerd[1990]: time="2026-01-24T00:33:30.076552324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777f8fb74-d8qgp,Uid:0c7963cb-5f76-453d-b9ca-f28ed3f17ce0,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c\"" Jan 24 00:33:30.092984 containerd[1990]: time="2026-01-24T00:33:30.092283459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:30.106017 containerd[1990]: time="2026-01-24T00:33:30.105972481Z" level=info msg="CreateContainer within sandbox \"9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:33:30.116422 systemd[1]: Started cri-containerd-1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7.scope - libcontainer container 1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7. Jan 24 00:33:30.159644 containerd[1990]: time="2026-01-24T00:33:30.159580207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b94c78c8-p5fzt,Uid:d98daf60-e1b2-4bcf-bf77-7fe1f3510929,Namespace:calico-system,Attempt:0,}" Jan 24 00:33:30.189186 containerd[1990]: time="2026-01-24T00:33:30.185543602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:30.189186 containerd[1990]: time="2026-01-24T00:33:30.185613362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:30.189186 containerd[1990]: time="2026-01-24T00:33:30.185630051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:30.189186 containerd[1990]: time="2026-01-24T00:33:30.185739832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:30.190096 containerd[1990]: time="2026-01-24T00:33:30.190052714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59955b8999-49mgw,Uid:44356a3b-6e7e-4852-a5bd-fffe6e033ca3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e\"" Jan 24 00:33:30.220756 systemd-networkd[1622]: calif2ff9cc824b: Gained IPv6LL Jan 24 00:33:30.235762 containerd[1990]: time="2026-01-24T00:33:30.235728047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgckl,Uid:6dea86f8-2783-4942-8476-4f769af7b22d,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7\"" Jan 24 00:33:30.237397 systemd[1]: Started cri-containerd-15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9.scope - libcontainer container 15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9. 
Jan 24 00:33:30.300005 kubelet[3194]: I0124 00:33:30.299938 3194 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ff43df-473e-4ec1-a924-95f6d305484f" path="/var/lib/kubelet/pods/e0ff43df-473e-4ec1-a924-95f6d305484f/volumes" Jan 24 00:33:30.340285 containerd[1990]: time="2026-01-24T00:33:30.339929143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-t69bh,Uid:2953039e-0e7f-4027-9a3c-137a03fa2153,Namespace:calico-system,Attempt:1,} returns sandbox id \"15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9\"" Jan 24 00:33:30.409508 systemd-networkd[1622]: calibbb1761c22f: Gained IPv6LL Jan 24 00:33:30.411265 containerd[1990]: time="2026-01-24T00:33:30.411231679Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:30.414769 containerd[1990]: time="2026-01-24T00:33:30.414701529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:30.423648 containerd[1990]: time="2026-01-24T00:33:30.415135557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:30.429749 kubelet[3194]: E0124 00:33:30.424055 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:30.430691 kubelet[3194]: E0124 00:33:30.430656 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:30.431562 containerd[1990]: time="2026-01-24T00:33:30.431492644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:33:30.439228 kubelet[3194]: E0124 00:33:30.438304 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7djh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-7njzz_calico-apiserver(8bd1b1e2-6c2f-496d-84df-3687f4a4a992): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:30.439561 kubelet[3194]: E0124 00:33:30.439513 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:33:30.462586 systemd-networkd[1622]: cali33d590ef719: Link UP Jan 24 00:33:30.465557 systemd-networkd[1622]: cali33d590ef719: Gained carrier Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.358 [INFO][5524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0 whisker-74b94c78c8- calico-system d98daf60-e1b2-4bcf-bf77-7fe1f3510929 1062 0 2026-01-24 00:33:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74b94c78c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-136 whisker-74b94c78c8-p5fzt eth0 whisker [] [] 
[kns.calico-system ksa.calico-system.whisker] cali33d590ef719 [] [] }} ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.359 [INFO][5524] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.389 [INFO][5542] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" HandleID="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Workload="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.390 [INFO][5542] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" HandleID="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Workload="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-136", "pod":"whisker-74b94c78c8-p5fzt", "timestamp":"2026-01-24 00:33:30.3899467 +0000 UTC"}, Hostname:"ip-172-31-16-136", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.390 [INFO][5542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.390 [INFO][5542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.390 [INFO][5542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-136' Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.398 [INFO][5542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.404 [INFO][5542] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.411 [INFO][5542] ipam/ipam.go 511: Trying affinity for 192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.413 [INFO][5542] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.416 [INFO][5542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.64/26 host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.416 [INFO][5542] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.64/26 handle="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.419 [INFO][5542] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.427 [INFO][5542] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.64/26 handle="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.442 [INFO][5542] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.72/26] block=192.168.10.64/26 handle="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.442 [INFO][5542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.72/26] handle="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" host="ip-172-31-16-136" Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.442 [INFO][5542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:33:30.493121 containerd[1990]: 2026-01-24 00:33:30.442 [INFO][5542] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.72/26] IPv6=[] ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" HandleID="k8s-pod-network.2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Workload="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" Jan 24 00:33:30.494099 containerd[1990]: 2026-01-24 00:33:30.447 [INFO][5524] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0", GenerateName:"whisker-74b94c78c8-", Namespace:"calico-system", SelfLink:"", UID:"d98daf60-e1b2-4bcf-bf77-7fe1f3510929", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74b94c78c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"", Pod:"whisker-74b94c78c8-p5fzt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali33d590ef719", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:30.494099 containerd[1990]: 2026-01-24 00:33:30.448 [INFO][5524] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.72/32] ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" Jan 24 00:33:30.494099 containerd[1990]: 2026-01-24 00:33:30.448 [INFO][5524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33d590ef719 ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" Jan 24 00:33:30.494099 containerd[1990]: 2026-01-24 00:33:30.470 [INFO][5524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" Jan 24 00:33:30.494099 containerd[1990]: 2026-01-24 00:33:30.476 [INFO][5524] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" 
WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0", GenerateName:"whisker-74b94c78c8-", Namespace:"calico-system", SelfLink:"", UID:"d98daf60-e1b2-4bcf-bf77-7fe1f3510929", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 33, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74b94c78c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e", Pod:"whisker-74b94c78c8-p5fzt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali33d590ef719", MAC:"46:f8:94:84:e1:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:33:30.494099 containerd[1990]: 2026-01-24 00:33:30.489 [INFO][5524] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e" Namespace="calico-system" Pod="whisker-74b94c78c8-p5fzt" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--74b94c78c8--p5fzt-eth0" Jan 24 00:33:30.517580 containerd[1990]: time="2026-01-24T00:33:30.517535112Z" level=info msg="CreateContainer within sandbox \"9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"368bc254fc16f497a813ff412b12f725a0bbde359173c929e779d70f0fd0dd1e\"" Jan 24 00:33:30.519333 containerd[1990]: time="2026-01-24T00:33:30.519287714Z" level=info msg="StartContainer for \"368bc254fc16f497a813ff412b12f725a0bbde359173c929e779d70f0fd0dd1e\"" Jan 24 00:33:30.526619 containerd[1990]: time="2026-01-24T00:33:30.526571362Z" level=info msg="CreateContainer within sandbox \"5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"755ed7ce7214c82b5879c9ee3c9f9bec08a9cd73882cc508a9f67288cc485f96\"" Jan 24 00:33:30.527341 containerd[1990]: time="2026-01-24T00:33:30.525668370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:33:30.527341 containerd[1990]: time="2026-01-24T00:33:30.525733429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:33:30.527341 containerd[1990]: time="2026-01-24T00:33:30.525745746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:30.527341 containerd[1990]: time="2026-01-24T00:33:30.525833303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:33:30.532094 containerd[1990]: time="2026-01-24T00:33:30.530178205Z" level=info msg="StartContainer for \"755ed7ce7214c82b5879c9ee3c9f9bec08a9cd73882cc508a9f67288cc485f96\"" Jan 24 00:33:30.575584 systemd[1]: Started cri-containerd-2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e.scope - libcontainer container 2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e. Jan 24 00:33:30.581765 systemd[1]: Started cri-containerd-368bc254fc16f497a813ff412b12f725a0bbde359173c929e779d70f0fd0dd1e.scope - libcontainer container 368bc254fc16f497a813ff412b12f725a0bbde359173c929e779d70f0fd0dd1e. Jan 24 00:33:30.596429 systemd[1]: Started cri-containerd-755ed7ce7214c82b5879c9ee3c9f9bec08a9cd73882cc508a9f67288cc485f96.scope - libcontainer container 755ed7ce7214c82b5879c9ee3c9f9bec08a9cd73882cc508a9f67288cc485f96. Jan 24 00:33:30.676132 containerd[1990]: time="2026-01-24T00:33:30.675998437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b94c78c8-p5fzt,Uid:d98daf60-e1b2-4bcf-bf77-7fe1f3510929,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a35418749c38d3773813ef569c4a5bdcc791c0bd24e7196c8a7f3f97f3c399e\"" Jan 24 00:33:30.730815 containerd[1990]: time="2026-01-24T00:33:30.730780319Z" level=info msg="StartContainer for \"368bc254fc16f497a813ff412b12f725a0bbde359173c929e779d70f0fd0dd1e\" returns successfully" Jan 24 00:33:30.730986 containerd[1990]: time="2026-01-24T00:33:30.730786765Z" level=info msg="StartContainer for \"755ed7ce7214c82b5879c9ee3c9f9bec08a9cd73882cc508a9f67288cc485f96\" returns successfully" Jan 24 00:33:30.731966 containerd[1990]: time="2026-01-24T00:33:30.731890744Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:30.734448 containerd[1990]: time="2026-01-24T00:33:30.734265480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:33:30.734448 containerd[1990]: time="2026-01-24T00:33:30.734377861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:30.735453 kubelet[3194]: E0124 00:33:30.735398 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:30.735453 kubelet[3194]: E0124 00:33:30.735447 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:30.737354 containerd[1990]: time="2026-01-24T00:33:30.736953459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:30.737468 kubelet[3194]: E0124 00:33:30.737115 3194 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jr4vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-777f8fb74-d8qgp_calico-system(0c7963cb-5f76-453d-b9ca-f28ed3f17ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:30.738429 kubelet[3194]: E0124 00:33:30.738388 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:33:30.793379 systemd-networkd[1622]: cali49598f0f84a: Gained IPv6LL 
Jan 24 00:33:30.923241 systemd-networkd[1622]: calib7619293cb5: Gained IPv6LL Jan 24 00:33:30.989553 containerd[1990]: time="2026-01-24T00:33:30.989419079Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:30.992020 containerd[1990]: time="2026-01-24T00:33:30.991914902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:30.992020 containerd[1990]: time="2026-01-24T00:33:30.991969460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:30.992309 kubelet[3194]: E0124 00:33:30.992268 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:30.993341 kubelet[3194]: E0124 00:33:30.992322 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:30.993341 kubelet[3194]: E0124 00:33:30.992940 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr42q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-49mgw_calico-apiserver(44356a3b-6e7e-4852-a5bd-fffe6e033ca3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:30.993470 containerd[1990]: time="2026-01-24T00:33:30.992646029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:33:30.995170 kubelet[3194]: E0124 00:33:30.995085 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:33:31.177546 systemd-networkd[1622]: cali26dd29f8fc9: Gained IPv6LL Jan 24 00:33:31.178369 systemd-networkd[1622]: cali588e7564342: Gained IPv6LL Jan 24 00:33:31.215562 kubelet[3194]: E0124 00:33:31.214770 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:33:31.217897 kubelet[3194]: E0124 00:33:31.217857 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:33:31.218065 kubelet[3194]: E0124 00:33:31.217942 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:33:31.218190 containerd[1990]: time="2026-01-24T00:33:31.218125934Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:31.229524 containerd[1990]: time="2026-01-24T00:33:31.220344952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:33:31.229524 containerd[1990]: time="2026-01-24T00:33:31.220437955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:33:31.229524 containerd[1990]: time="2026-01-24T00:33:31.221713599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:33:31.230128 kubelet[3194]: E0124 00:33:31.220544 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:31.230128 kubelet[3194]: E0124 00:33:31.220575 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:31.230128 kubelet[3194]: E0124 00:33:31.220721 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:31.241735 systemd-networkd[1622]: cali620e5cd1b75: Gained IPv6LL Jan 24 00:33:31.254272 kubelet[3194]: I0124 00:33:31.253555 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xk4gx" podStartSLOduration=85.251950409 podStartE2EDuration="1m25.251950409s" podCreationTimestamp="2026-01-24 00:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:33:31.23153772 +0000 UTC m=+91.785493636" watchObservedRunningTime="2026-01-24 00:33:31.251950409 +0000 UTC m=+91.805906316" Jan 24 00:33:31.255044 kubelet[3194]: I0124 00:33:31.254581 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vpf5l" podStartSLOduration=85.254569087 podStartE2EDuration="1m25.254569087s" podCreationTimestamp="2026-01-24 00:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:33:31.254324716 +0000 UTC m=+91.808280611" watchObservedRunningTime="2026-01-24 00:33:31.254569087 +0000 UTC m=+91.808524996" Jan 24 00:33:31.512404 containerd[1990]: time="2026-01-24T00:33:31.512276572Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:31.514673 containerd[1990]: time="2026-01-24T00:33:31.514608636Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:33:31.514875 containerd[1990]: time="2026-01-24T00:33:31.514640423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:31.514913 kubelet[3194]: E0124 00:33:31.514842 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:31.514913 kubelet[3194]: E0124 00:33:31.514883 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:31.515685 kubelet[3194]: E0124 00:33:31.515226 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs2mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t69bh_calico-system(2953039e-0e7f-4027-9a3c-137a03fa2153): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:31.516049 containerd[1990]: time="2026-01-24T00:33:31.515281962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:33:31.517196 kubelet[3194]: E0124 00:33:31.516980 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:33:31.782799 containerd[1990]: time="2026-01-24T00:33:31.782671162Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:31.785157 containerd[1990]: time="2026-01-24T00:33:31.784959840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:33:31.785295 containerd[1990]: time="2026-01-24T00:33:31.785197986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:33:31.785520 kubelet[3194]: E0124 00:33:31.785485 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:31.785588 kubelet[3194]: E0124 00:33:31.785531 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:31.785946 kubelet[3194]: E0124 00:33:31.785781 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e19c64a1732d4616946f432813c0113d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:31.786089 containerd[1990]: time="2026-01-24T00:33:31.785910837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:33:31.881784 systemd-networkd[1622]: cali33d590ef719: Gained IPv6LL Jan 24 00:33:32.034225 containerd[1990]: time="2026-01-24T00:33:32.034093208Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:32.036354 containerd[1990]: time="2026-01-24T00:33:32.036277781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:33:32.036550 containerd[1990]: time="2026-01-24T00:33:32.036382367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:33:32.036592 kubelet[3194]: E0124 00:33:32.036540 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:32.038567 kubelet[3194]: E0124 00:33:32.036594 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:32.038567 kubelet[3194]: E0124 00:33:32.036821 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:32.038722 containerd[1990]: time="2026-01-24T00:33:32.037199585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:33:32.038759 kubelet[3194]: E0124 00:33:32.038713 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:33:32.221544 kubelet[3194]: E0124 00:33:32.220691 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:33:32.221544 kubelet[3194]: E0124 00:33:32.220857 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:33:32.221544 kubelet[3194]: E0124 00:33:32.221482 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:33:32.221912 kubelet[3194]: E0124 00:33:32.221617 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:33:32.307194 containerd[1990]: time="2026-01-24T00:33:32.306855588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:32.309229 containerd[1990]: time="2026-01-24T00:33:32.308949607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:33:32.309581 containerd[1990]: time="2026-01-24T00:33:32.309535973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:32.310440 kubelet[3194]: E0124 00:33:32.309810 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:32.310440 kubelet[3194]: E0124 00:33:32.309868 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:32.310440 kubelet[3194]: E0124 00:33:32.310015 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:33:32.311567 kubelet[3194]: E0124 00:33:32.311510 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929" Jan 24 00:33:33.223376 kubelet[3194]: E0124 00:33:33.222517 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929" Jan 24 00:33:33.835838 systemd[1]: Started sshd@9-172.31.16.136:22-4.153.228.146:55754.service - OpenSSH per-connection server daemon (4.153.228.146:55754). 
Jan 24 00:33:34.046789 ntpd[1955]: Listen normally on 9 calibbb1761c22f [fe80::ecee:eeff:feee:eeee%7]:123 Jan 24 00:33:34.046857 ntpd[1955]: Listen normally on 10 calif2ff9cc824b [fe80::ecee:eeff:feee:eeee%8]:123 Jan 24 00:33:34.046894 ntpd[1955]: Listen normally on 11 calib7619293cb5 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 24 00:33:34.046924 ntpd[1955]: Listen normally on 12 cali26dd29f8fc9 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 24 00:33:34.046951 ntpd[1955]: Listen normally on 13 cali588e7564342 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 24 00:33:34.046979 ntpd[1955]: Listen normally on 14 cali49598f0f84a [fe80::ecee:eeff:feee:eeee%12]:123 Jan 24 00:33:34.047006 ntpd[1955]: Listen normally on 15 cali620e5cd1b75 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 24 00:33:34.047035 ntpd[1955]: Listen normally on 16 cali33d590ef719 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 24 00:33:34.404467 sshd[5691]: Accepted publickey for core from 4.153.228.146 port 55754 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:34.407378 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:34.415958 systemd-logind[1960]: New session 10 of user core. Jan 24 00:33:34.422227 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:33:35.090838 sshd[5691]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:35.095945 systemd[1]: sshd@9-172.31.16.136:22-4.153.228.146:55754.service: Deactivated successfully. Jan 24 00:33:35.098132 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:33:35.099287 systemd-logind[1960]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:33:35.100285 systemd-logind[1960]: Removed session 10. Jan 24 00:33:35.176462 systemd[1]: Started sshd@10-172.31.16.136:22-4.153.228.146:38932.service - OpenSSH per-connection server daemon (4.153.228.146:38932). Jan 24 00:33:35.668293 sshd[5706]: Accepted publickey for core from 4.153.228.146 port 38932 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:35.669745 sshd[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:35.674484 systemd-logind[1960]: New session 11 of user core. Jan 24 00:33:35.683401 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 24 00:33:36.273935 sshd[5706]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:36.278593 systemd[1]: sshd@10-172.31.16.136:22-4.153.228.146:38932.service: Deactivated successfully. Jan 24 00:33:36.281458 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:33:36.283198 systemd-logind[1960]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:33:36.285610 systemd-logind[1960]: Removed session 11. Jan 24 00:33:36.362462 systemd[1]: Started sshd@11-172.31.16.136:22-4.153.228.146:38944.service - OpenSSH per-connection server daemon (4.153.228.146:38944). Jan 24 00:33:36.855799 sshd[5719]: Accepted publickey for core from 4.153.228.146 port 38944 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:36.857515 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:36.862650 systemd-logind[1960]: New session 12 of user core. Jan 24 00:33:36.868332 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:33:37.313719 sshd[5719]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:37.317649 systemd[1]: sshd@11-172.31.16.136:22-4.153.228.146:38944.service: Deactivated successfully. Jan 24 00:33:37.319853 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:33:37.320567 systemd-logind[1960]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:33:37.322113 systemd-logind[1960]: Removed session 12. Jan 24 00:33:39.664904 update_engine[1961]: I20260124 00:33:39.664764 1961 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 24 00:33:39.664904 update_engine[1961]: I20260124 00:33:39.664836 1961 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 24 00:33:39.667292 update_engine[1961]: I20260124 00:33:39.667256 1961 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 24 00:33:39.668031 update_engine[1961]: I20260124 00:33:39.668004 1961 omaha_request_params.cc:62] Current group set to lts Jan 24 00:33:39.668163 update_engine[1961]: I20260124 00:33:39.668128 1961 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 24 00:33:39.668163 update_engine[1961]: I20260124 00:33:39.668155 1961 update_attempter.cc:643] Scheduling an action processor start. 
Jan 24 00:33:39.668221 update_engine[1961]: I20260124 00:33:39.668175 1961 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:33:39.668221 update_engine[1961]: I20260124 00:33:39.668210 1961 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 24 00:33:39.668381 update_engine[1961]: I20260124 00:33:39.668263 1961 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:33:39.668381 update_engine[1961]: I20260124 00:33:39.668274 1961 omaha_request_action.cc:272] Request: Jan 24 00:33:39.668381 update_engine[1961]: [Omaha request XML body not preserved in this capture] Jan 24 00:33:39.668381 update_engine[1961]: I20260124 00:33:39.668282 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:33:39.680737 update_engine[1961]: I20260124 00:33:39.680362 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:33:39.680737 update_engine[1961]: I20260124 00:33:39.680658 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:33:39.685409 locksmithd[1993]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 24 00:33:39.695206 update_engine[1961]: E20260124 00:33:39.695113 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:33:39.695308 update_engine[1961]: I20260124 00:33:39.695250 1961 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 24 00:33:42.418497 systemd[1]: Started sshd@12-172.31.16.136:22-4.153.228.146:38952.service - OpenSSH per-connection server daemon (4.153.228.146:38952). Jan 24 00:33:42.953703 sshd[5753]: Accepted publickey for core from 4.153.228.146 port 38952 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:42.955130 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:42.960425 systemd-logind[1960]: New session 13 of user core. Jan 24 00:33:42.967366 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:33:43.458087 sshd[5753]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:43.465927 systemd[1]: sshd@12-172.31.16.136:22-4.153.228.146:38952.service: Deactivated successfully. Jan 24 00:33:43.468872 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:33:43.471163 systemd-logind[1960]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:33:43.472672 systemd-logind[1960]: Removed session 13. 
Jan 24 00:33:43.605502 containerd[1990]: time="2026-01-24T00:33:43.605426523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:43.893248 containerd[1990]: time="2026-01-24T00:33:43.892959323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:43.895203 containerd[1990]: time="2026-01-24T00:33:43.895152046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:43.895371 containerd[1990]: time="2026-01-24T00:33:43.895170475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:43.895420 kubelet[3194]: E0124 00:33:43.895384 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:43.895750 kubelet[3194]: E0124 00:33:43.895428 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:43.895750 kubelet[3194]: E0124 00:33:43.895645 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr42q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-49mgw_calico-apiserver(44356a3b-6e7e-4852-a5bd-fffe6e033ca3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:43.896255 containerd[1990]: time="2026-01-24T00:33:43.896227008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:33:43.897750 kubelet[3194]: E0124 00:33:43.897597 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:33:44.145653 containerd[1990]: time="2026-01-24T00:33:44.145520320Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:44.147784 containerd[1990]: time="2026-01-24T00:33:44.147732653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:33:44.147900 containerd[1990]: time="2026-01-24T00:33:44.147818005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:44.148151 kubelet[3194]: E0124 00:33:44.148073 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:44.148263 kubelet[3194]: E0124 00:33:44.148158 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:33:44.148694 kubelet[3194]: E0124 00:33:44.148626 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jr4vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-777f8fb74-d8qgp_calico-system(0c7963cb-5f76-453d-b9ca-f28ed3f17ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:44.149845 kubelet[3194]: E0124 00:33:44.149802 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:33:44.605374 containerd[1990]: time="2026-01-24T00:33:44.605311534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:33:44.878480 containerd[1990]: time="2026-01-24T00:33:44.878349776Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:44.880411 containerd[1990]: time="2026-01-24T00:33:44.880360568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:33:44.880939 containerd[1990]: time="2026-01-24T00:33:44.880536408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:44.881175 kubelet[3194]: E0124 00:33:44.880692 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:44.881175 kubelet[3194]: E0124 00:33:44.880734 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:33:44.881175 kubelet[3194]: E0124 00:33:44.880870 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs2mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t69bh_calico-system(2953039e-0e7f-4027-9a3c-137a03fa2153): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:44.882445 kubelet[3194]: E0124 00:33:44.882396 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:33:45.069970 systemd[1]: run-containerd-runc-k8s.io-9e76c446f2a60ec59fcc7cb3dd91f46a552d87fbdf03e4641141e50bcd9737d4-runc.c6v1Bj.mount: Deactivated successfully. 
Jan 24 00:33:45.608290 containerd[1990]: time="2026-01-24T00:33:45.608082567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:33:45.878906 containerd[1990]: time="2026-01-24T00:33:45.878780684Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:45.881054 containerd[1990]: time="2026-01-24T00:33:45.880912495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:33:45.881200 containerd[1990]: time="2026-01-24T00:33:45.880924716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:33:45.881330 kubelet[3194]: E0124 00:33:45.881292 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:45.881616 kubelet[3194]: E0124 00:33:45.881339 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:33:45.881616 kubelet[3194]: E0124 00:33:45.881531 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e19c64a1732d4616946f432813c0113d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:45.882198 containerd[1990]: time="2026-01-24T00:33:45.882172894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:33:46.143216 containerd[1990]: time="2026-01-24T00:33:46.143062699Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:46.145324 containerd[1990]: time="2026-01-24T00:33:46.145214060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:33:46.145324 containerd[1990]: time="2026-01-24T00:33:46.145267415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:33:46.145639 kubelet[3194]: E0124 00:33:46.145600 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:46.146217 kubelet[3194]: E0124 00:33:46.145645 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:33:46.146217 kubelet[3194]: E0124 00:33:46.145894 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:46.146494 containerd[1990]: time="2026-01-24T00:33:46.145927530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:33:46.441041 containerd[1990]: time="2026-01-24T00:33:46.440920953Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:46.443198 containerd[1990]: time="2026-01-24T00:33:46.443123515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:33:46.443368 containerd[1990]: time="2026-01-24T00:33:46.443238235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:33:46.443433 kubelet[3194]: E0124 00:33:46.443376 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:46.443433 kubelet[3194]: E0124 00:33:46.443421 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:33:46.443706 kubelet[3194]: E0124 00:33:46.443667 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:46.444416 containerd[1990]: time="2026-01-24T00:33:46.444338028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:33:46.445104 kubelet[3194]: E0124 00:33:46.444924 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929" Jan 24 00:33:46.698432 containerd[1990]: time="2026-01-24T00:33:46.698311702Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:46.700482 containerd[1990]: time="2026-01-24T00:33:46.700374170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:33:46.700482 containerd[1990]: time="2026-01-24T00:33:46.700427851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:33:46.700632 kubelet[3194]: E0124 00:33:46.700598 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:46.700712 kubelet[3194]: E0124 00:33:46.700647 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:33:46.701168 kubelet[3194]: E0124 00:33:46.700832 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:46.701518 containerd[1990]: time="2026-01-24T00:33:46.701483637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:33:46.704655 kubelet[3194]: E0124 00:33:46.704577 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:33:46.950179 containerd[1990]: time="2026-01-24T00:33:46.950049741Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:33:46.952282 containerd[1990]: time="2026-01-24T00:33:46.952224040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:33:46.952465 containerd[1990]: time="2026-01-24T00:33:46.952245206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:33:46.952515 kubelet[3194]: E0124 00:33:46.952461 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:46.952515 kubelet[3194]: E0124 00:33:46.952507 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:33:46.952847 kubelet[3194]: E0124 00:33:46.952636 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7djh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-7njzz_calico-apiserver(8bd1b1e2-6c2f-496d-84df-3687f4a4a992): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:33:46.954250 kubelet[3194]: E0124 00:33:46.954209 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:33:48.543462 systemd[1]: Started sshd@13-172.31.16.136:22-4.153.228.146:46572.service - OpenSSH per-connection server daemon (4.153.228.146:46572). Jan 24 00:33:49.048831 sshd[5786]: Accepted publickey for core from 4.153.228.146 port 46572 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:49.053384 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:49.062096 systemd-logind[1960]: New session 14 of user core. Jan 24 00:33:49.066366 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:33:49.624844 update_engine[1961]: I20260124 00:33:49.624177 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:33:49.624844 update_engine[1961]: I20260124 00:33:49.624411 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:33:49.624844 update_engine[1961]: I20260124 00:33:49.624624 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:33:49.626067 update_engine[1961]: E20260124 00:33:49.626015 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:33:49.626169 update_engine[1961]: I20260124 00:33:49.626084 1961 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 24 00:33:50.256160 sshd[5786]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:50.263178 systemd[1]: sshd@13-172.31.16.136:22-4.153.228.146:46572.service: Deactivated successfully. Jan 24 00:33:50.267935 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:33:50.270517 systemd-logind[1960]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:33:50.271704 systemd-logind[1960]: Removed session 14. Jan 24 00:33:54.605458 kubelet[3194]: E0124 00:33:54.605403 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:33:55.365537 systemd[1]: Started sshd@14-172.31.16.136:22-4.153.228.146:43044.service - OpenSSH per-connection server daemon (4.153.228.146:43044). 
Jan 24 00:33:55.610234 kubelet[3194]: E0124 00:33:55.609677 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:33:55.963258 sshd[5801]: Accepted publickey for core from 4.153.228.146 port 43044 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:33:55.966564 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:33:55.973702 systemd-logind[1960]: New session 15 of user core. Jan 24 00:33:55.981475 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:33:56.819404 sshd[5801]: pam_unix(sshd:session): session closed for user core Jan 24 00:33:56.824801 systemd[1]: sshd@14-172.31.16.136:22-4.153.228.146:43044.service: Deactivated successfully. Jan 24 00:33:56.831192 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:33:56.834434 systemd-logind[1960]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:33:56.837030 systemd-logind[1960]: Removed session 15. Jan 24 00:33:57.609024 kubelet[3194]: E0124 00:33:57.608964 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:33:57.613401 kubelet[3194]: E0124 00:33:57.613352 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929" Jan 24 00:33:59.588087 containerd[1990]: time="2026-01-24T00:33:59.588042299Z" level=info msg="StopPodSandbox for \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\"" Jan 24 00:33:59.610169 kubelet[3194]: E0124 00:33:59.609765 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:33:59.610169 kubelet[3194]: E0124 00:33:59.609912 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:33:59.627207 update_engine[1961]: I20260124 00:33:59.626302 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:33:59.627207 update_engine[1961]: I20260124 00:33:59.626509 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:33:59.627207 update_engine[1961]: I20260124 00:33:59.626739 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:33:59.630295 update_engine[1961]: E20260124 00:33:59.630206 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:33:59.630295 update_engine[1961]: I20260124 00:33:59.630272 1961 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.123 [WARNING][5825] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8d604c20-234e-4790-af46-e3ccb6ebbab2", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da", Pod:"coredns-674b8bbfcf-vpf5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2ff9cc824b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.125 [INFO][5825] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.125 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" iface="eth0" netns="" Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.125 [INFO][5825] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.125 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.184 [INFO][5833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.184 [INFO][5833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.184 [INFO][5833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.206 [WARNING][5833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.206 [INFO][5833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.215 [INFO][5833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:00.226835 containerd[1990]: 2026-01-24 00:34:00.220 [INFO][5825] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.228264 containerd[1990]: time="2026-01-24T00:34:00.226903166Z" level=info msg="TearDown network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\" successfully" Jan 24 00:34:00.228264 containerd[1990]: time="2026-01-24T00:34:00.226936513Z" level=info msg="StopPodSandbox for \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\" returns successfully" Jan 24 00:34:00.284693 containerd[1990]: time="2026-01-24T00:34:00.284620849Z" level=info msg="RemovePodSandbox for \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\"" Jan 24 00:34:00.288392 containerd[1990]: time="2026-01-24T00:34:00.288331486Z" level=info msg="Forcibly stopping sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\"" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.408 [WARNING][5848] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8d604c20-234e-4790-af46-e3ccb6ebbab2", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"5b942064ddb4524e1b9ed9fa151771316a65a0214b8aa0d8040c8281f52d70da", Pod:"coredns-674b8bbfcf-vpf5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2ff9cc824b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.408 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.408 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" iface="eth0" netns="" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.408 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.408 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.492 [INFO][5855] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.492 [INFO][5855] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.492 [INFO][5855] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.505 [WARNING][5855] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.505 [INFO][5855] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" HandleID="k8s-pod-network.0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--vpf5l-eth0" Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.507 [INFO][5855] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:00.514638 containerd[1990]: 2026-01-24 00:34:00.510 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e" Jan 24 00:34:00.514638 containerd[1990]: time="2026-01-24T00:34:00.514336516Z" level=info msg="TearDown network for sandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\" successfully" Jan 24 00:34:00.547554 containerd[1990]: time="2026-01-24T00:34:00.547483652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:00.547727 containerd[1990]: time="2026-01-24T00:34:00.547590884Z" level=info msg="RemovePodSandbox \"0f5e73f7e50d6efd410a6142da578a0e966a0aaea44115bdbde00376a6eb7e4e\" returns successfully" Jan 24 00:34:00.555385 containerd[1990]: time="2026-01-24T00:34:00.555340703Z" level=info msg="StopPodSandbox for \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\"" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.603 [WARNING][5871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2953039e-0e7f-4027-9a3c-137a03fa2153", ResourceVersion:"1328", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9", Pod:"goldmane-666569f655-t69bh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali620e5cd1b75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.605 [INFO][5871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.606 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" iface="eth0" netns="" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.606 [INFO][5871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.606 [INFO][5871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.646 [INFO][5879] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.647 [INFO][5879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.647 [INFO][5879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.655 [WARNING][5879] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.655 [INFO][5879] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.657 [INFO][5879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:00.665233 containerd[1990]: 2026-01-24 00:34:00.661 [INFO][5871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.667866 containerd[1990]: time="2026-01-24T00:34:00.665259503Z" level=info msg="TearDown network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\" successfully" Jan 24 00:34:00.667866 containerd[1990]: time="2026-01-24T00:34:00.665287310Z" level=info msg="StopPodSandbox for \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\" returns successfully" Jan 24 00:34:00.667866 containerd[1990]: time="2026-01-24T00:34:00.665858417Z" level=info msg="RemovePodSandbox for \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\"" Jan 24 00:34:00.667866 containerd[1990]: time="2026-01-24T00:34:00.665884853Z" level=info msg="Forcibly stopping sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\"" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.707 [WARNING][5899] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2953039e-0e7f-4027-9a3c-137a03fa2153", ResourceVersion:"1328", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"15983daa770ec5a5acc761ed6e6dc71285b7dde1f05fc39a422ac056831422e9", Pod:"goldmane-666569f655-t69bh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali620e5cd1b75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.707 [INFO][5899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.707 [INFO][5899] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" iface="eth0" netns="" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.707 [INFO][5899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.707 [INFO][5899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.737 [INFO][5907] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.738 [INFO][5907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.738 [INFO][5907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.744 [WARNING][5907] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.744 [INFO][5907] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" HandleID="k8s-pod-network.8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Workload="ip--172--31--16--136-k8s-goldmane--666569f655--t69bh-eth0" Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.746 [INFO][5907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:00.752250 containerd[1990]: 2026-01-24 00:34:00.748 [INFO][5899] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69" Jan 24 00:34:00.752250 containerd[1990]: time="2026-01-24T00:34:00.750506414Z" level=info msg="TearDown network for sandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\" successfully" Jan 24 00:34:00.762482 containerd[1990]: time="2026-01-24T00:34:00.762434361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:00.762741 containerd[1990]: time="2026-01-24T00:34:00.762719275Z" level=info msg="RemovePodSandbox \"8739c84bdab30e6d6de4655bd375b879b2aa8079866eec5f3fac8f2c66c69c69\" returns successfully" Jan 24 00:34:00.763728 containerd[1990]: time="2026-01-24T00:34:00.763352810Z" level=info msg="StopPodSandbox for \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\"" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.862 [WARNING][5922] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"44356a3b-6e7e-4852-a5bd-fffe6e033ca3", ResourceVersion:"1314", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e", Pod:"calico-apiserver-59955b8999-49mgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588e7564342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.863 [INFO][5922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.863 [INFO][5922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" iface="eth0" netns="" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.863 [INFO][5922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.863 [INFO][5922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.901 [INFO][5930] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.901 [INFO][5930] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.901 [INFO][5930] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.914 [WARNING][5930] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.914 [INFO][5930] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.917 [INFO][5930] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:00.922929 containerd[1990]: 2026-01-24 00:34:00.920 [INFO][5922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:00.924396 containerd[1990]: time="2026-01-24T00:34:00.923386358Z" level=info msg="TearDown network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\" successfully" Jan 24 00:34:00.924396 containerd[1990]: time="2026-01-24T00:34:00.923419461Z" level=info msg="StopPodSandbox for \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\" returns successfully" Jan 24 00:34:00.926523 containerd[1990]: time="2026-01-24T00:34:00.925983729Z" level=info msg="RemovePodSandbox for \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\"" Jan 24 00:34:00.926523 containerd[1990]: time="2026-01-24T00:34:00.926049922Z" level=info msg="Forcibly stopping sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\"" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:00.989 [WARNING][5944] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"44356a3b-6e7e-4852-a5bd-fffe6e033ca3", ResourceVersion:"1314", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"10d3f659a483a654ce16debdc7e16476c66622df2e8565813df97820864bd97e", Pod:"calico-apiserver-59955b8999-49mgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali588e7564342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:00.989 [INFO][5944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:00.989 [INFO][5944] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" iface="eth0" netns="" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:00.989 [INFO][5944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:00.990 [INFO][5944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:01.056 [INFO][5951] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:01.058 [INFO][5951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:01.058 [INFO][5951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:01.079 [WARNING][5951] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:01.079 [INFO][5951] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" HandleID="k8s-pod-network.fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--49mgw-eth0" Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:01.082 [INFO][5951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:01.091321 containerd[1990]: 2026-01-24 00:34:01.086 [INFO][5944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28" Jan 24 00:34:01.091321 containerd[1990]: time="2026-01-24T00:34:01.091234696Z" level=info msg="TearDown network for sandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\" successfully" Jan 24 00:34:01.113994 containerd[1990]: time="2026-01-24T00:34:01.110936710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:01.113994 containerd[1990]: time="2026-01-24T00:34:01.111019971Z" level=info msg="RemovePodSandbox \"fade732390888ce05bdd19d82fd3dbed2d3874684485a0f5885cfe54521c9c28\" returns successfully" Jan 24 00:34:01.113994 containerd[1990]: time="2026-01-24T00:34:01.113642677Z" level=info msg="StopPodSandbox for \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\"" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.202 [WARNING][5965] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea86f8-2783-4942-8476-4f769af7b22d", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7", Pod:"csi-node-driver-zgckl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49598f0f84a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.202 [INFO][5965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.202 [INFO][5965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" iface="eth0" netns="" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.202 [INFO][5965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.202 [INFO][5965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.258 [INFO][5972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.260 [INFO][5972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.260 [INFO][5972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.277 [WARNING][5972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.277 [INFO][5972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.284 [INFO][5972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:01.295706 containerd[1990]: 2026-01-24 00:34:01.289 [INFO][5965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.295706 containerd[1990]: time="2026-01-24T00:34:01.295386139Z" level=info msg="TearDown network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\" successfully" Jan 24 00:34:01.295706 containerd[1990]: time="2026-01-24T00:34:01.295413090Z" level=info msg="StopPodSandbox for \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\" returns successfully" Jan 24 00:34:01.299021 containerd[1990]: time="2026-01-24T00:34:01.296809124Z" level=info msg="RemovePodSandbox for \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\"" Jan 24 00:34:01.299021 containerd[1990]: time="2026-01-24T00:34:01.296840635Z" level=info msg="Forcibly stopping sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\"" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.373 [WARNING][5986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6dea86f8-2783-4942-8476-4f769af7b22d", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"1b6a41a8b6a3bb9b304f8167da84676c61bd90647a1210dd4ea604186347bfe7", Pod:"csi-node-driver-zgckl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali49598f0f84a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.375 [INFO][5986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.376 [INFO][5986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" iface="eth0" netns="" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.376 [INFO][5986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.376 [INFO][5986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.412 [INFO][5993] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.412 [INFO][5993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.412 [INFO][5993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.423 [WARNING][5993] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.423 [INFO][5993] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" HandleID="k8s-pod-network.d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Workload="ip--172--31--16--136-k8s-csi--node--driver--zgckl-eth0" Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.425 [INFO][5993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:01.430936 containerd[1990]: 2026-01-24 00:34:01.428 [INFO][5986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c" Jan 24 00:34:01.430936 containerd[1990]: time="2026-01-24T00:34:01.430738541Z" level=info msg="TearDown network for sandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\" successfully" Jan 24 00:34:01.439186 containerd[1990]: time="2026-01-24T00:34:01.438957263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:01.439186 containerd[1990]: time="2026-01-24T00:34:01.439036634Z" level=info msg="RemovePodSandbox \"d35a61908cde8d40581de2dcad25449b07ec08c63813b0d8ae162a8d1733ba6c\" returns successfully" Jan 24 00:34:01.441190 containerd[1990]: time="2026-01-24T00:34:01.440265556Z" level=info msg="StopPodSandbox for \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\"" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.492 [WARNING][6007] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e6c717e9-efbf-49cc-b817-198682317a0f", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629", Pod:"coredns-674b8bbfcf-xk4gx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dd29f8fc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.492 [INFO][6007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.492 [INFO][6007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" iface="eth0" netns="" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.492 [INFO][6007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.492 [INFO][6007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.528 [INFO][6014] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.528 [INFO][6014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.529 [INFO][6014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.538 [WARNING][6014] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.538 [INFO][6014] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.540 [INFO][6014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:01.544751 containerd[1990]: 2026-01-24 00:34:01.542 [INFO][6007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.545968 containerd[1990]: time="2026-01-24T00:34:01.545679737Z" level=info msg="TearDown network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\" successfully" Jan 24 00:34:01.545968 containerd[1990]: time="2026-01-24T00:34:01.545720648Z" level=info msg="StopPodSandbox for \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\" returns successfully" Jan 24 00:34:01.547732 containerd[1990]: time="2026-01-24T00:34:01.547355846Z" level=info msg="RemovePodSandbox for \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\"" Jan 24 00:34:01.547732 containerd[1990]: time="2026-01-24T00:34:01.547397511Z" level=info msg="Forcibly stopping sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\"" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.607 [WARNING][6030] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e6c717e9-efbf-49cc-b817-198682317a0f", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"9a378e49ca35967023a744c58e897c8d63744827580cfa7763989d53f7a0f629", Pod:"coredns-674b8bbfcf-xk4gx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dd29f8fc9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.607 [INFO][6030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.607 [INFO][6030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" iface="eth0" netns="" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.607 [INFO][6030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.607 [INFO][6030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.644 [INFO][6038] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.644 [INFO][6038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.644 [INFO][6038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.656 [WARNING][6038] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.656 [INFO][6038] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" HandleID="k8s-pod-network.8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Workload="ip--172--31--16--136-k8s-coredns--674b8bbfcf--xk4gx-eth0" Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.658 [INFO][6038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:01.664065 containerd[1990]: 2026-01-24 00:34:01.661 [INFO][6030] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35" Jan 24 00:34:01.666204 containerd[1990]: time="2026-01-24T00:34:01.664804788Z" level=info msg="TearDown network for sandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\" successfully" Jan 24 00:34:01.674176 containerd[1990]: time="2026-01-24T00:34:01.673282804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:01.674176 containerd[1990]: time="2026-01-24T00:34:01.673366936Z" level=info msg="RemovePodSandbox \"8e38418cf6188e7e6a24f7bc0c10b5d6a9345280c7113ce8101b919b322d3c35\" returns successfully" Jan 24 00:34:01.674176 containerd[1990]: time="2026-01-24T00:34:01.673901692Z" level=info msg="StopPodSandbox for \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\"" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.741 [WARNING][6052] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0", GenerateName:"calico-kube-controllers-777f8fb74-", Namespace:"calico-system", SelfLink:"", UID:"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0", ResourceVersion:"1309", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777f8fb74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c", Pod:"calico-kube-controllers-777f8fb74-d8qgp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7619293cb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.742 [INFO][6052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.742 [INFO][6052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" iface="eth0" netns="" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.742 [INFO][6052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.742 [INFO][6052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.778 [INFO][6060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.779 [INFO][6060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.779 [INFO][6060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.787 [WARNING][6060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.787 [INFO][6060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.791 [INFO][6060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:01.799451 containerd[1990]: 2026-01-24 00:34:01.794 [INFO][6052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:01.799451 containerd[1990]: time="2026-01-24T00:34:01.798887066Z" level=info msg="TearDown network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\" successfully" Jan 24 00:34:01.799451 containerd[1990]: time="2026-01-24T00:34:01.798917502Z" level=info msg="StopPodSandbox for \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\" returns successfully" Jan 24 00:34:01.804167 containerd[1990]: time="2026-01-24T00:34:01.802033968Z" level=info msg="RemovePodSandbox for \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\"" Jan 24 00:34:01.804167 containerd[1990]: time="2026-01-24T00:34:01.802081385Z" level=info msg="Forcibly stopping sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\"" Jan 24 00:34:01.944554 systemd[1]: Started sshd@15-172.31.16.136:22-4.153.228.146:43050.service - OpenSSH per-connection server daemon (4.153.228.146:43050). Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:01.901 [WARNING][6074] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0", GenerateName:"calico-kube-controllers-777f8fb74-", Namespace:"calico-system", SelfLink:"", UID:"0c7963cb-5f76-453d-b9ca-f28ed3f17ce0", ResourceVersion:"1309", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777f8fb74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"6ab6a55cae1e79f10a29a40395711ddb857b9a3d5cc5d8d962bf0f20e00c293c", Pod:"calico-kube-controllers-777f8fb74-d8qgp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib7619293cb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:01.919 [INFO][6074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:01.919 [INFO][6074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" iface="eth0" netns="" Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:01.919 [INFO][6074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:01.919 [INFO][6074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:02.042 [INFO][6083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:02.042 [INFO][6083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:02.042 [INFO][6083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:02.071 [WARNING][6083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:02.072 [INFO][6083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" HandleID="k8s-pod-network.9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Workload="ip--172--31--16--136-k8s-calico--kube--controllers--777f8fb74--d8qgp-eth0" Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:02.086 [INFO][6083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:02.101487 containerd[1990]: 2026-01-24 00:34:02.098 [INFO][6074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f" Jan 24 00:34:02.101487 containerd[1990]: time="2026-01-24T00:34:02.101359846Z" level=info msg="TearDown network for sandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\" successfully" Jan 24 00:34:02.126520 containerd[1990]: time="2026-01-24T00:34:02.126462255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:02.126661 containerd[1990]: time="2026-01-24T00:34:02.126551179Z" level=info msg="RemovePodSandbox \"9359da4ee04710c024645d57937058e25cc25967944613a01b17e75ebce74c2f\" returns successfully" Jan 24 00:34:02.127383 containerd[1990]: time="2026-01-24T00:34:02.127356386Z" level=info msg="StopPodSandbox for \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\"" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.271 [WARNING][6098] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bd1b1e2-6c2f-496d-84df-3687f4a4a992", ResourceVersion:"1349", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b", Pod:"calico-apiserver-59955b8999-7njzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb1761c22f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.272 [INFO][6098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.272 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" iface="eth0" netns="" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.272 [INFO][6098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.272 [INFO][6098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.336 [INFO][6105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.337 [INFO][6105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.337 [INFO][6105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.346 [WARNING][6105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.346 [INFO][6105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.348 [INFO][6105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:02.353427 containerd[1990]: 2026-01-24 00:34:02.350 [INFO][6098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.353427 containerd[1990]: time="2026-01-24T00:34:02.353388789Z" level=info msg="TearDown network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\" successfully" Jan 24 00:34:02.353427 containerd[1990]: time="2026-01-24T00:34:02.353421374Z" level=info msg="StopPodSandbox for \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\" returns successfully" Jan 24 00:34:02.355620 containerd[1990]: time="2026-01-24T00:34:02.355567582Z" level=info msg="RemovePodSandbox for \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\"" Jan 24 00:34:02.355746 containerd[1990]: time="2026-01-24T00:34:02.355635916Z" level=info msg="Forcibly stopping sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\"" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.422 [WARNING][6119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0", GenerateName:"calico-apiserver-59955b8999-", Namespace:"calico-apiserver", SelfLink:"", UID:"8bd1b1e2-6c2f-496d-84df-3687f4a4a992", ResourceVersion:"1349", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 32, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59955b8999", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-136", ContainerID:"e06dc426c84c8c019b9dd7cc6d1906d0159aa9563c58baf13cbd1bd7c4ef6f5b", Pod:"calico-apiserver-59955b8999-7njzz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbb1761c22f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.423 [INFO][6119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.423 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" iface="eth0" netns="" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.423 [INFO][6119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.424 [INFO][6119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.474 [INFO][6126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.476 [INFO][6126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.476 [INFO][6126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.491 [WARNING][6126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.491 [INFO][6126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" HandleID="k8s-pod-network.8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Workload="ip--172--31--16--136-k8s-calico--apiserver--59955b8999--7njzz-eth0" Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.495 [INFO][6126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:02.502181 containerd[1990]: 2026-01-24 00:34:02.499 [INFO][6119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d" Jan 24 00:34:02.502181 containerd[1990]: time="2026-01-24T00:34:02.501671393Z" level=info msg="TearDown network for sandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\" successfully" Jan 24 00:34:02.509205 containerd[1990]: time="2026-01-24T00:34:02.509134952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:02.509404 containerd[1990]: time="2026-01-24T00:34:02.509235725Z" level=info msg="RemovePodSandbox \"8041bee3f048b0aec9a68ed80f47f43b8bb3b4da418e80ab51b7575141337e7d\" returns successfully" Jan 24 00:34:02.509889 containerd[1990]: time="2026-01-24T00:34:02.509857425Z" level=info msg="StopPodSandbox for \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\"" Jan 24 00:34:02.589543 sshd[6081]: Accepted publickey for core from 4.153.228.146 port 43050 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:02.594480 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:02.607051 systemd-logind[1960]: New session 16 of user core. Jan 24 00:34:02.611385 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.585 [WARNING][6141] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.586 [INFO][6141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.586 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" iface="eth0" netns="" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.586 [INFO][6141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.586 [INFO][6141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.658 [INFO][6148] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.658 [INFO][6148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.658 [INFO][6148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.673 [WARNING][6148] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.673 [INFO][6148] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.688 [INFO][6148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:02.695041 containerd[1990]: 2026-01-24 00:34:02.690 [INFO][6141] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.695041 containerd[1990]: time="2026-01-24T00:34:02.694754344Z" level=info msg="TearDown network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\" successfully" Jan 24 00:34:02.695041 containerd[1990]: time="2026-01-24T00:34:02.694810838Z" level=info msg="StopPodSandbox for \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\" returns successfully" Jan 24 00:34:02.697220 containerd[1990]: time="2026-01-24T00:34:02.696657002Z" level=info msg="RemovePodSandbox for \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\"" Jan 24 00:34:02.697220 containerd[1990]: time="2026-01-24T00:34:02.696694762Z" level=info msg="Forcibly stopping sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\"" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.774 [WARNING][6163] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" WorkloadEndpoint="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.775 [INFO][6163] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.775 [INFO][6163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" iface="eth0" netns="" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.775 [INFO][6163] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.775 [INFO][6163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.806 [INFO][6171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.806 [INFO][6171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.806 [INFO][6171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.815 [WARNING][6171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.816 [INFO][6171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" HandleID="k8s-pod-network.bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Workload="ip--172--31--16--136-k8s-whisker--7f66bf4696--t77qb-eth0" Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.817 [INFO][6171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:34:02.822598 containerd[1990]: 2026-01-24 00:34:02.820 [INFO][6163] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64" Jan 24 00:34:02.824815 containerd[1990]: time="2026-01-24T00:34:02.822703103Z" level=info msg="TearDown network for sandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\" successfully" Jan 24 00:34:02.832044 containerd[1990]: time="2026-01-24T00:34:02.831963894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:34:02.832772 containerd[1990]: time="2026-01-24T00:34:02.832264358Z" level=info msg="RemovePodSandbox \"bdc455a92a192930a13fe4ff12d021fa64a7d7686d00381f91a5af116311cc64\" returns successfully" Jan 24 00:34:03.552406 sshd[6081]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:03.556449 systemd-logind[1960]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:34:03.557830 systemd[1]: sshd@15-172.31.16.136:22-4.153.228.146:43050.service: Deactivated successfully. Jan 24 00:34:03.563593 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:34:03.566878 systemd-logind[1960]: Removed session 16. Jan 24 00:34:03.644615 systemd[1]: Started sshd@16-172.31.16.136:22-4.153.228.146:43066.service - OpenSSH per-connection server daemon (4.153.228.146:43066). Jan 24 00:34:04.150945 sshd[6187]: Accepted publickey for core from 4.153.228.146 port 43066 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:04.151927 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:04.158511 systemd-logind[1960]: New session 17 of user core. Jan 24 00:34:04.165083 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:34:08.579044 sshd[6187]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:08.589021 systemd-logind[1960]: Session 17 logged out. Waiting for processes to exit. Jan 24 00:34:08.589185 systemd[1]: sshd@16-172.31.16.136:22-4.153.228.146:43066.service: Deactivated successfully. Jan 24 00:34:08.593910 systemd[1]: session-17.scope: Deactivated successfully. Jan 24 00:34:08.597440 systemd-logind[1960]: Removed session 17. Jan 24 00:34:08.606101 containerd[1990]: time="2026-01-24T00:34:08.605802877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:34:08.665477 systemd[1]: Started sshd@17-172.31.16.136:22-4.153.228.146:43102.service - OpenSSH per-connection server daemon (4.153.228.146:43102). 
Jan 24 00:34:08.897756 containerd[1990]: time="2026-01-24T00:34:08.897621648Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:08.899842 containerd[1990]: time="2026-01-24T00:34:08.899787131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:34:08.899948 containerd[1990]: time="2026-01-24T00:34:08.899876993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:34:08.900122 kubelet[3194]: E0124 00:34:08.900086 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:34:08.900469 kubelet[3194]: E0124 00:34:08.900144 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:34:08.900469 kubelet[3194]: E0124 00:34:08.900360 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs2mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t69bh_calico-system(2953039e-0e7f-4027-9a3c-137a03fa2153): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:08.901074 containerd[1990]: time="2026-01-24T00:34:08.901053294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:34:08.902104 kubelet[3194]: E0124 00:34:08.902051 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:34:09.128952 containerd[1990]: time="2026-01-24T00:34:09.128720992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:09.130937 containerd[1990]: time="2026-01-24T00:34:09.130755343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:34:09.130937 containerd[1990]: time="2026-01-24T00:34:09.130868145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:34:09.131097 kubelet[3194]: E0124 00:34:09.131055 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:34:09.131173 kubelet[3194]: E0124 00:34:09.131112 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 
00:34:09.132974 kubelet[3194]: E0124 00:34:09.131299 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jr4vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-777f8fb74-d8qgp_calico-system(0c7963cb-5f76-453d-b9ca-f28ed3f17ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:09.132974 kubelet[3194]: E0124 00:34:09.132814 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" 
podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:34:09.202589 sshd[6198]: Accepted publickey for core from 4.153.228.146 port 43102 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:09.204203 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:09.214885 systemd-logind[1960]: New session 18 of user core. Jan 24 00:34:09.221359 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 24 00:34:09.611329 containerd[1990]: time="2026-01-24T00:34:09.610545310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:34:09.625051 update_engine[1961]: I20260124 00:34:09.624258 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:34:09.625051 update_engine[1961]: I20260124 00:34:09.624517 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:34:09.626834 update_engine[1961]: I20260124 00:34:09.626000 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:34:09.626834 update_engine[1961]: E20260124 00:34:09.626472 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:34:09.626834 update_engine[1961]: I20260124 00:34:09.626538 1961 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:34:09.633514 update_engine[1961]: I20260124 00:34:09.632993 1961 omaha_request_action.cc:617] Omaha request response: Jan 24 00:34:09.633514 update_engine[1961]: E20260124 00:34:09.633131 1961 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 24 00:34:09.633514 update_engine[1961]: I20260124 00:34:09.633201 1961 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 24 00:34:09.633514 update_engine[1961]: I20260124 00:34:09.633210 1961 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:34:09.633514 update_engine[1961]: I20260124 00:34:09.633219 1961 update_attempter.cc:306] Processing Done. Jan 24 00:34:09.633514 update_engine[1961]: E20260124 00:34:09.633247 1961 update_attempter.cc:619] Update failed. Jan 24 00:34:09.633514 update_engine[1961]: I20260124 00:34:09.633257 1961 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 24 00:34:09.633514 update_engine[1961]: I20260124 00:34:09.633265 1961 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 24 00:34:09.633514 update_engine[1961]: I20260124 00:34:09.633275 1961 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 24 00:34:09.635059 update_engine[1961]: I20260124 00:34:09.634501 1961 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 24 00:34:09.635059 update_engine[1961]: I20260124 00:34:09.634555 1961 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 24 00:34:09.635059 update_engine[1961]: I20260124 00:34:09.634567 1961 omaha_request_action.cc:272] Request: Jan 24 00:34:09.635059 update_engine[1961]: [Omaha request XML body not captured] Jan 24 00:34:09.635059 update_engine[1961]: I20260124 00:34:09.634575 1961 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 24 00:34:09.635059 update_engine[1961]: I20260124 00:34:09.634778 1961 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 24 00:34:09.635059 update_engine[1961]: I20260124 00:34:09.634995 1961 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 24 00:34:09.639880 update_engine[1961]: E20260124 00:34:09.635435 1961 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 24 00:34:09.639880 update_engine[1961]: I20260124 00:34:09.635492 1961 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 24 00:34:09.639880 update_engine[1961]: I20260124 00:34:09.635504 1961 omaha_request_action.cc:617] Omaha request response: Jan 24 00:34:09.639880 update_engine[1961]: I20260124 00:34:09.635514 1961 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:34:09.639880 update_engine[1961]: I20260124 00:34:09.635521 1961 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 24 00:34:09.639880 update_engine[1961]: I20260124 00:34:09.635529 1961 update_attempter.cc:306] Processing Done. Jan 24 00:34:09.639880 update_engine[1961]: I20260124 00:34:09.635538 1961 update_attempter.cc:310] Error event sent.
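The update_engine records above show the failure mode plainly: the client posts "an Omaha request to disabled", and libcurl then reports "Could not resolve host: disabled". That is, the update server value on this host appears to be the literal string `disabled` (a common way to switch Flatcar update checks off), so the fetcher tries to resolve it as a hostname and fails by design. A minimal Go sketch of the same resolution failure; the hostname is taken from the log, everything else is illustrative:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// "disabled" is not a resolvable hostname, so LookupHost returns a
	// DNS error -- the same root cause behind update_engine's
	// "Unable to get http response code: Could not resolve host: disabled".
	addrs, err := net.LookupHost("disabled")
	if err != nil {
		fmt.Println("resolve failed (expected):", err)
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}
```

The error is benign in this configuration: as the surrounding records show, update_engine converts it to kActionCodeOmahaErrorInHTTPResponse and simply schedules the next check.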
Jan 24 00:34:09.641208 update_engine[1961]: I20260124 00:34:09.640333 1961 update_check_scheduler.cc:74] Next update check in 44m16s Jan 24 00:34:09.643916 locksmithd[1993]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 24 00:34:09.643916 locksmithd[1993]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 24 00:34:09.866300 containerd[1990]: time="2026-01-24T00:34:09.866170103Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:09.868486 containerd[1990]: time="2026-01-24T00:34:09.868432419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:34:09.868619 containerd[1990]: time="2026-01-24T00:34:09.868533647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:34:09.868758 kubelet[3194]: E0124 00:34:09.868718 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:34:09.868943 kubelet[3194]: E0124 00:34:09.868769 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:34:09.869069 kubelet[3194]: E0124 00:34:09.869026 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e19c64a1732d4616946f432813c0113d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:09.869715 containerd[1990]: time="2026-01-24T00:34:09.869675652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:34:10.142178 containerd[1990]: time="2026-01-24T00:34:10.141927098Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:10.144189 containerd[1990]: time="2026-01-24T00:34:10.143998754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:34:10.144189 containerd[1990]: time="2026-01-24T00:34:10.144095347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:34:10.146161 kubelet[3194]: E0124 00:34:10.144515 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:10.146161 kubelet[3194]: E0124 00:34:10.144579 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:10.146161 kubelet[3194]: E0124 00:34:10.145861 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr42q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-49mgw_calico-apiserver(44356a3b-6e7e-4852-a5bd-fffe6e033ca3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:10.146748 containerd[1990]: time="2026-01-24T00:34:10.145098431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:34:10.147886 kubelet[3194]: E0124 00:34:10.147846 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:34:10.436319 containerd[1990]: time="2026-01-24T00:34:10.436273758Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:10.441333 containerd[1990]: time="2026-01-24T00:34:10.441267595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:34:10.441474 containerd[1990]: time="2026-01-24T00:34:10.441378760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:34:10.441619 kubelet[3194]: E0124 00:34:10.441584 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:34:10.441666 kubelet[3194]: E0124 00:34:10.441633 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:34:10.441782 kubelet[3194]: E0124 00:34:10.441747 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:10.452989 kubelet[3194]: E0124 00:34:10.452926 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929" Jan 24 00:34:10.745441 sshd[6198]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:10.758659 systemd-logind[1960]: Session 18 logged out. Waiting for processes to exit. Jan 24 00:34:10.759731 systemd[1]: sshd@17-172.31.16.136:22-4.153.228.146:43102.service: Deactivated successfully. 
Jan 24 00:34:10.765086 systemd[1]: session-18.scope: Deactivated successfully. Jan 24 00:34:10.770678 systemd-logind[1960]: Removed session 18. Jan 24 00:34:10.844993 systemd[1]: Started sshd@18-172.31.16.136:22-4.153.228.146:43114.service - OpenSSH per-connection server daemon (4.153.228.146:43114). Jan 24 00:34:11.353991 sshd[6224]: Accepted publickey for core from 4.153.228.146 port 43114 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:11.355372 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:11.361899 systemd-logind[1960]: New session 19 of user core. Jan 24 00:34:11.371371 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 24 00:34:11.615271 containerd[1990]: time="2026-01-24T00:34:11.614853851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:34:11.921205 containerd[1990]: time="2026-01-24T00:34:11.921156252Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:11.925720 containerd[1990]: time="2026-01-24T00:34:11.925510108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:34:11.925720 containerd[1990]: time="2026-01-24T00:34:11.925614146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:34:11.927748 kubelet[3194]: E0124 00:34:11.927708 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:34:11.928545 kubelet[3194]: E0124 00:34:11.928361 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:34:11.929239 kubelet[3194]: E0124 00:34:11.929131 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:11.932920 containerd[1990]: time="2026-01-24T00:34:11.932692931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:34:12.223727 containerd[1990]: time="2026-01-24T00:34:12.222801224Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:12.230585 containerd[1990]: time="2026-01-24T00:34:12.230435196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:34:12.230585 containerd[1990]: time="2026-01-24T00:34:12.230476489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:34:12.230771 kubelet[3194]: E0124 00:34:12.230669 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:12.230771 kubelet[3194]: E0124 00:34:12.230716 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:34:12.232527 kubelet[3194]: E0124 00:34:12.230989 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7djh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-7njzz_calico-apiserver(8bd1b1e2-6c2f-496d-84df-3687f4a4a992): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:12.232748 containerd[1990]: time="2026-01-24T00:34:12.232266431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:34:12.232839 kubelet[3194]: E0124 00:34:12.232560 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:34:12.489613 containerd[1990]: time="2026-01-24T00:34:12.489471326Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:34:12.491784 containerd[1990]: time="2026-01-24T00:34:12.491700290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:34:12.491912 containerd[1990]: time="2026-01-24T00:34:12.491792982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:34:12.493330 kubelet[3194]: E0124 00:34:12.493283 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:34:12.493443 kubelet[3194]: E0124 00:34:12.493345 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:34:12.520470 kubelet[3194]: E0124 00:34:12.520392 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:34:12.525013 kubelet[3194]: E0124 00:34:12.522258 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:34:12.726429 sshd[6224]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:12.730944 systemd[1]: sshd@18-172.31.16.136:22-4.153.228.146:43114.service: Deactivated successfully. Jan 24 00:34:12.736793 systemd[1]: session-19.scope: Deactivated successfully. Jan 24 00:34:12.741809 systemd-logind[1960]: Session 19 logged out. Waiting for processes to exit. Jan 24 00:34:12.745258 systemd-logind[1960]: Removed session 19. 
Jan 24 00:34:12.820495 systemd[1]: Started sshd@19-172.31.16.136:22-4.153.228.146:43122.service - OpenSSH per-connection server daemon (4.153.228.146:43122). Jan 24 00:34:13.340507 sshd[6235]: Accepted publickey for core from 4.153.228.146 port 43122 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:13.341844 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:13.347570 systemd-logind[1960]: New session 20 of user core. Jan 24 00:34:13.353374 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 24 00:34:14.211443 sshd[6235]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:14.217730 systemd[1]: sshd@19-172.31.16.136:22-4.153.228.146:43122.service: Deactivated successfully. Jan 24 00:34:14.218062 systemd-logind[1960]: Session 20 logged out. Waiting for processes to exit. Jan 24 00:34:14.222966 systemd[1]: session-20.scope: Deactivated successfully. Jan 24 00:34:14.227980 systemd-logind[1960]: Removed session 20. Jan 24 00:34:19.304379 systemd[1]: Started sshd@20-172.31.16.136:22-4.153.228.146:51668.service - OpenSSH per-connection server daemon (4.153.228.146:51668). Jan 24 00:34:19.837734 sshd[6271]: Accepted publickey for core from 4.153.228.146 port 51668 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:19.840767 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:19.853338 systemd-logind[1960]: New session 21 of user core. Jan 24 00:34:19.859190 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 24 00:34:20.597419 sshd[6271]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:20.605458 systemd[1]: sshd@20-172.31.16.136:22-4.153.228.146:51668.service: Deactivated successfully. Jan 24 00:34:20.609903 kubelet[3194]: E0124 00:34:20.609843 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:34:20.610834 systemd[1]: session-21.scope: Deactivated successfully. Jan 24 00:34:20.612598 systemd-logind[1960]: Session 21 logged out. Waiting for processes to exit. Jan 24 00:34:20.615647 systemd-logind[1960]: Removed session 21. 
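From 00:34:20 onward the kubelet entries switch from ErrImagePull to ImagePullBackOff: the registry round-trips happen less often, and "Error syncing pod, skipping" now reports the back-off instead of a fresh pull failure. The retry cadence is the usual doubling-with-a-ceiling pattern; the sketch below illustrates it, with the 10s initial delay and 5m cap being commonly documented kubelet defaults, not values taken from this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling back-off with a ceiling, the pattern behind the
	// increasingly sparse "Back-off pulling image" records above.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Once the missing tags exist (or the pod specs are fixed to reference published images), the back-off resets on the next successful pull.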
Jan 24 00:34:21.613299 kubelet[3194]: E0124 00:34:21.613242 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:34:23.608275 kubelet[3194]: E0124 00:34:23.608109 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:34:23.610829 kubelet[3194]: E0124 00:34:23.610673 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929" Jan 24 00:34:25.610631 kubelet[3194]: E0124 00:34:25.610541 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:34:25.683670 systemd[1]: Started sshd@21-172.31.16.136:22-4.153.228.146:49514.service - OpenSSH per-connection server daemon (4.153.228.146:49514). 
Jan 24 00:34:26.183360 sshd[6285]: Accepted publickey for core from 4.153.228.146 port 49514 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:26.186387 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:26.191414 systemd-logind[1960]: New session 22 of user core. Jan 24 00:34:26.198525 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 24 00:34:26.607448 kubelet[3194]: E0124 00:34:26.605723 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:34:26.646736 sshd[6285]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:26.659877 systemd[1]: sshd@21-172.31.16.136:22-4.153.228.146:49514.service: Deactivated successfully. Jan 24 00:34:26.660327 systemd-logind[1960]: Session 22 logged out. Waiting for processes to exit. Jan 24 00:34:26.666896 systemd[1]: session-22.scope: Deactivated successfully. Jan 24 00:34:26.670987 systemd-logind[1960]: Removed session 22. Jan 24 00:34:31.737240 systemd[1]: Started sshd@22-172.31.16.136:22-4.153.228.146:49528.service - OpenSSH per-connection server daemon (4.153.228.146:49528). Jan 24 00:34:32.220927 sshd[6301]: Accepted publickey for core from 4.153.228.146 port 49528 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:32.222417 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:32.227171 systemd-logind[1960]: New session 23 of user core. Jan 24 00:34:32.230325 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 24 00:34:32.696453 sshd[6301]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:32.699566 systemd-logind[1960]: Session 23 logged out. Waiting for processes to exit. Jan 24 00:34:32.699684 systemd[1]: sshd@22-172.31.16.136:22-4.153.228.146:49528.service: Deactivated successfully. Jan 24 00:34:32.701716 systemd[1]: session-23.scope: Deactivated successfully. Jan 24 00:34:32.704606 systemd-logind[1960]: Removed session 23. 
Jan 24 00:34:33.606485 kubelet[3194]: E0124 00:34:33.606404 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153" Jan 24 00:34:34.604576 kubelet[3194]: E0124 00:34:34.603729 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0" Jan 24 00:34:36.605230 kubelet[3194]: E0124 00:34:36.604773 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3" Jan 24 00:34:36.626091 kubelet[3194]: E0124 00:34:36.625626 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929" Jan 24 00:34:37.792977 systemd[1]: Started sshd@23-172.31.16.136:22-4.153.228.146:54888.service - OpenSSH per-connection server daemon (4.153.228.146:54888). Jan 24 00:34:38.301578 sshd[6314]: Accepted publickey for core from 4.153.228.146 port 54888 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:38.303844 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:38.310004 systemd-logind[1960]: New session 24 of user core. 
Jan 24 00:34:38.318396 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 24 00:34:38.611448 kubelet[3194]: E0124 00:34:38.611210 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992" Jan 24 00:34:38.618260 kubelet[3194]: E0124 00:34:38.616287 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d" Jan 24 00:34:38.900014 sshd[6314]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:38.906760 systemd[1]: sshd@23-172.31.16.136:22-4.153.228.146:54888.service: Deactivated successfully. Jan 24 00:34:38.910895 systemd[1]: session-24.scope: Deactivated successfully. Jan 24 00:34:38.912868 systemd-logind[1960]: Session 24 logged out. Waiting for processes to exit. Jan 24 00:34:38.914307 systemd-logind[1960]: Removed session 24. Jan 24 00:34:43.990560 systemd[1]: Started sshd@24-172.31.16.136:22-4.153.228.146:54892.service - OpenSSH per-connection server daemon (4.153.228.146:54892). Jan 24 00:34:44.531656 sshd[6335]: Accepted publickey for core from 4.153.228.146 port 54892 ssh2: RSA SHA256:AB5yEUNjI0c4eTJKXs1/JdwdYMHfwCUf7HtUTiqLxAY Jan 24 00:34:44.533819 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:34:44.543974 systemd-logind[1960]: New session 25 of user core. Jan 24 00:34:44.550115 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 24 00:34:45.067497 systemd[1]: run-containerd-runc-k8s.io-9e76c446f2a60ec59fcc7cb3dd91f46a552d87fbdf03e4641141e50bcd9737d4-runc.ixewGn.mount: Deactivated successfully. Jan 24 00:34:45.617240 sshd[6335]: pam_unix(sshd:session): session closed for user core Jan 24 00:34:45.630946 systemd[1]: sshd@24-172.31.16.136:22-4.153.228.146:54892.service: Deactivated successfully. Jan 24 00:34:45.637514 systemd[1]: session-25.scope: Deactivated successfully. Jan 24 00:34:45.647928 systemd-logind[1960]: Session 25 logged out. Waiting for processes to exit. Jan 24 00:34:45.651827 systemd-logind[1960]: Removed session 25. 
Jan 24 00:34:46.620255 kubelet[3194]: E0124 00:34:46.620192 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153"
Jan 24 00:34:47.606195 kubelet[3194]: E0124 00:34:47.605422 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0"
Jan 24 00:34:48.604362 kubelet[3194]: E0124 00:34:48.604323 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3"
Jan 24 00:34:50.619241 containerd[1990]: time="2026-01-24T00:34:50.612065440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:34:50.959743 containerd[1990]: time="2026-01-24T00:34:50.959679687Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:34:50.961931 containerd[1990]: time="2026-01-24T00:34:50.961864921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:34:50.962067 containerd[1990]: time="2026-01-24T00:34:50.961953257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:34:50.962338 kubelet[3194]: E0124 00:34:50.962295 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:34:50.962779 kubelet[3194]: E0124 00:34:50.962344 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:34:50.962779 kubelet[3194]: E0124 00:34:50.962468 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e19c64a1732d4616946f432813c0113d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:34:50.964485 containerd[1990]: time="2026-01-24T00:34:50.964458952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:34:51.260400 containerd[1990]: time="2026-01-24T00:34:51.260264855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:34:51.262370 containerd[1990]: time="2026-01-24T00:34:51.262295254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:34:51.262561 containerd[1990]: time="2026-01-24T00:34:51.262339137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:34:51.262692 kubelet[3194]: E0124 00:34:51.262603 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:34:51.262692 kubelet[3194]: E0124 00:34:51.262661 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:34:51.262878 kubelet[3194]: E0124 00:34:51.262818 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q86vj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74b94c78c8-p5fzt_calico-system(d98daf60-e1b2-4bcf-bf77-7fe1f3510929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:34:51.264054 kubelet[3194]: E0124 00:34:51.264005 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929"
Jan 24 00:34:52.612528 containerd[1990]: time="2026-01-24T00:34:52.612266830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:34:52.908516 containerd[1990]: time="2026-01-24T00:34:52.908370649Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:34:52.910485 containerd[1990]: time="2026-01-24T00:34:52.910433839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:34:52.910635 containerd[1990]: time="2026-01-24T00:34:52.910539477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:34:52.910853 kubelet[3194]: E0124 00:34:52.910812 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:34:52.911200 kubelet[3194]: E0124 00:34:52.910863 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:34:52.911200 kubelet[3194]: E0124 00:34:52.910983 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:34:52.913101 containerd[1990]: time="2026-01-24T00:34:52.913028881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:34:53.173093 containerd[1990]: time="2026-01-24T00:34:53.173038559Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:34:53.175328 containerd[1990]: time="2026-01-24T00:34:53.175268235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:34:53.175551 containerd[1990]: time="2026-01-24T00:34:53.175308477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:34:53.175613 kubelet[3194]: E0124 00:34:53.175512 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:34:53.175613 kubelet[3194]: E0124 00:34:53.175555 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:34:53.175738 kubelet[3194]: E0124 00:34:53.175690 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bzv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zgckl_calico-system(6dea86f8-2783-4942-8476-4f769af7b22d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:34:53.177066 kubelet[3194]: E0124 00:34:53.176996 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d"
Jan 24 00:34:53.605551 containerd[1990]: time="2026-01-24T00:34:53.605134709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:34:53.873783 containerd[1990]: time="2026-01-24T00:34:53.873638535Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:34:53.875892 containerd[1990]: time="2026-01-24T00:34:53.875741473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:34:53.875892 containerd[1990]: time="2026-01-24T00:34:53.875786050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:34:53.876405 kubelet[3194]: E0124 00:34:53.876064 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:34:53.876405 kubelet[3194]: E0124 00:34:53.876135 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:34:53.876951 kubelet[3194]: E0124 00:34:53.876497 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7djh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-7njzz_calico-apiserver(8bd1b1e2-6c2f-496d-84df-3687f4a4a992): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:34:53.878159 kubelet[3194]: E0124 00:34:53.878104 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992"
Jan 24 00:34:59.254034 systemd[1]: cri-containerd-f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa.scope: Deactivated successfully.
Jan 24 00:34:59.254864 systemd[1]: cri-containerd-f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa.scope: Consumed 16.731s CPU time.
Jan 24 00:34:59.331066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa-rootfs.mount: Deactivated successfully.
Jan 24 00:34:59.374967 containerd[1990]: time="2026-01-24T00:34:59.364481632Z" level=info msg="shim disconnected" id=f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa namespace=k8s.io
Jan 24 00:34:59.389025 containerd[1990]: time="2026-01-24T00:34:59.388793702Z" level=warning msg="cleaning up after shim disconnected" id=f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa namespace=k8s.io
Jan 24 00:34:59.389025 containerd[1990]: time="2026-01-24T00:34:59.388848239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:34:59.521662 systemd[1]: cri-containerd-f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af.scope: Deactivated successfully.
Jan 24 00:34:59.522862 systemd[1]: cri-containerd-f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af.scope: Consumed 3.957s CPU time, 36.0M memory peak, 0B memory swap peak.
Jan 24 00:34:59.549349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af-rootfs.mount: Deactivated successfully.
Jan 24 00:34:59.561399 containerd[1990]: time="2026-01-24T00:34:59.561341052Z" level=info msg="shim disconnected" id=f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af namespace=k8s.io
Jan 24 00:34:59.561399 containerd[1990]: time="2026-01-24T00:34:59.561387208Z" level=warning msg="cleaning up after shim disconnected" id=f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af namespace=k8s.io
Jan 24 00:34:59.561399 containerd[1990]: time="2026-01-24T00:34:59.561396872Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:34:59.606835 containerd[1990]: time="2026-01-24T00:34:59.606775908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:34:59.716116 kubelet[3194]: I0124 00:34:59.715845 3194 scope.go:117] "RemoveContainer" containerID="f25191915d5145d0a6ae4c3b8ba6dc8fde0e300f8602821097df217948c19efa"
Jan 24 00:34:59.728885 kubelet[3194]: I0124 00:34:59.728554 3194 scope.go:117] "RemoveContainer" containerID="f50649f4e6a8d40f179847794a86f29cb80c3b6ad724e11030ac48f6181f76af"
Jan 24 00:34:59.752612 containerd[1990]: time="2026-01-24T00:34:59.752562136Z" level=info msg="CreateContainer within sandbox \"b52fe91fd6fc3e1b49b4a161be02a41a8888794f620a4f9e051aa2d8e0c4e5dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 24 00:34:59.752821 containerd[1990]: time="2026-01-24T00:34:59.752761235Z" level=info msg="CreateContainer within sandbox \"0e4fbb42b28bbabcb90659f4dfa445aa3ed10dbb3cbd1fcba670080e43788f14\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 24 00:34:59.834006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967739501.mount: Deactivated successfully.
Jan 24 00:34:59.850747 containerd[1990]: time="2026-01-24T00:34:59.850693271Z" level=info msg="CreateContainer within sandbox \"b52fe91fd6fc3e1b49b4a161be02a41a8888794f620a4f9e051aa2d8e0c4e5dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"12c2f0b34b793129d14e60cd94250bcbcaeda91ec61e4f2bd6923362c831beb1\""
Jan 24 00:34:59.851668 containerd[1990]: time="2026-01-24T00:34:59.851621171Z" level=info msg="StartContainer for \"12c2f0b34b793129d14e60cd94250bcbcaeda91ec61e4f2bd6923362c831beb1\""
Jan 24 00:34:59.857798 containerd[1990]: time="2026-01-24T00:34:59.857741700Z" level=info msg="CreateContainer within sandbox \"0e4fbb42b28bbabcb90659f4dfa445aa3ed10dbb3cbd1fcba670080e43788f14\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"451191ee6b84085ce980af54575816f8843e641bb21d33d01fa671767334f61f\""
Jan 24 00:34:59.858302 containerd[1990]: time="2026-01-24T00:34:59.858268078Z" level=info msg="StartContainer for \"451191ee6b84085ce980af54575816f8843e641bb21d33d01fa671767334f61f\""
Jan 24 00:34:59.897779 systemd[1]: Started cri-containerd-451191ee6b84085ce980af54575816f8843e641bb21d33d01fa671767334f61f.scope - libcontainer container 451191ee6b84085ce980af54575816f8843e641bb21d33d01fa671767334f61f.
Jan 24 00:34:59.924415 containerd[1990]: time="2026-01-24T00:34:59.924353907Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:34:59.930186 containerd[1990]: time="2026-01-24T00:34:59.928580948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:34:59.930186 containerd[1990]: time="2026-01-24T00:34:59.928796825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:34:59.930394 kubelet[3194]: E0124 00:34:59.930102 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:34:59.931940 kubelet[3194]: E0124 00:34:59.930541 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:34:59.931940 kubelet[3194]: E0124 00:34:59.930733 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jr4vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-777f8fb74-d8qgp_calico-system(0c7963cb-5f76-453d-b9ca-f28ed3f17ce0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:34:59.934915 kubelet[3194]: E0124 00:34:59.934420 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0"
Jan 24 00:34:59.944251 systemd[1]: Started cri-containerd-12c2f0b34b793129d14e60cd94250bcbcaeda91ec61e4f2bd6923362c831beb1.scope - libcontainer container 12c2f0b34b793129d14e60cd94250bcbcaeda91ec61e4f2bd6923362c831beb1.
Jan 24 00:35:00.015207 containerd[1990]: time="2026-01-24T00:35:00.015118811Z" level=info msg="StartContainer for \"451191ee6b84085ce980af54575816f8843e641bb21d33d01fa671767334f61f\" returns successfully"
Jan 24 00:35:00.044404 containerd[1990]: time="2026-01-24T00:35:00.043602246Z" level=info msg="StartContainer for \"12c2f0b34b793129d14e60cd94250bcbcaeda91ec61e4f2bd6923362c831beb1\" returns successfully"
Jan 24 00:35:00.348341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287314066.mount: Deactivated successfully.
Jan 24 00:35:00.604860 containerd[1990]: time="2026-01-24T00:35:00.604355713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:35:00.960245 containerd[1990]: time="2026-01-24T00:35:00.960171870Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:35:00.962462 containerd[1990]: time="2026-01-24T00:35:00.962303414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:35:00.962462 containerd[1990]: time="2026-01-24T00:35:00.962337235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:35:00.963074 kubelet[3194]: E0124 00:35:00.962548 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:35:00.963074 kubelet[3194]: E0124 00:35:00.962792 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:35:00.963074 kubelet[3194]: E0124 00:35:00.962987 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs2mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-t69bh_calico-system(2953039e-0e7f-4027-9a3c-137a03fa2153): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:35:00.964292 kubelet[3194]: E0124 00:35:00.964228 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-t69bh" podUID="2953039e-0e7f-4027-9a3c-137a03fa2153"
Jan 24 00:35:01.605792 containerd[1990]: time="2026-01-24T00:35:01.605313843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:35:01.943017 containerd[1990]: time="2026-01-24T00:35:01.942938168Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:35:01.949821 containerd[1990]: time="2026-01-24T00:35:01.949284740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:35:01.949821 containerd[1990]: time="2026-01-24T00:35:01.949344809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:35:01.950546 kubelet[3194]: E0124 00:35:01.950241 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:35:01.950546 kubelet[3194]: E0124 00:35:01.950294 3194 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:35:01.951169 kubelet[3194]: E0124 00:35:01.950501 3194 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zr42q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-59955b8999-49mgw_calico-apiserver(44356a3b-6e7e-4852-a5bd-fffe6e033ca3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:35:01.962271 kubelet[3194]: E0124 00:35:01.962181 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-49mgw" podUID="44356a3b-6e7e-4852-a5bd-fffe6e033ca3"
Jan 24 00:35:03.577394 kubelet[3194]: E0124 00:35:03.577245 3194 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-136?timeout=10s\": context deadline exceeded"
Jan 24 00:35:03.605635 kubelet[3194]: E0124 00:35:03.605584 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zgckl" podUID="6dea86f8-2783-4942-8476-4f769af7b22d"
Jan 24 00:35:03.613417 kubelet[3194]: E0124 00:35:03.613154 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74b94c78c8-p5fzt" podUID="d98daf60-e1b2-4bcf-bf77-7fe1f3510929"
Jan 24 00:35:05.870480 systemd[1]: cri-containerd-c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04.scope: Deactivated successfully.
Jan 24 00:35:05.871781 systemd[1]: cri-containerd-c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04.scope: Consumed 2.659s CPU time, 15.6M memory peak, 0B memory swap peak.
Jan 24 00:35:05.896402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04-rootfs.mount: Deactivated successfully.
Jan 24 00:35:05.930212 containerd[1990]: time="2026-01-24T00:35:05.930114038Z" level=info msg="shim disconnected" id=c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04 namespace=k8s.io
Jan 24 00:35:05.930212 containerd[1990]: time="2026-01-24T00:35:05.930193980Z" level=warning msg="cleaning up after shim disconnected" id=c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04 namespace=k8s.io
Jan 24 00:35:05.930212 containerd[1990]: time="2026-01-24T00:35:05.930207523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 24 00:35:06.760608 kubelet[3194]: I0124 00:35:06.760320 3194 scope.go:117] "RemoveContainer" containerID="c479483f183ae789c3cd336fae89b6744da0461ceb544ec2e7ab609c63079d04"
Jan 24 00:35:06.762864 containerd[1990]: time="2026-01-24T00:35:06.762828002Z" level=info msg="CreateContainer within sandbox \"c436aca7a03f2a10c1401ae69755925302829fa093429061b40aa6cf0968bce7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 24 00:35:06.803092 containerd[1990]: time="2026-01-24T00:35:06.803048169Z" level=info msg="CreateContainer within sandbox \"c436aca7a03f2a10c1401ae69755925302829fa093429061b40aa6cf0968bce7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"89fe67618d04765979ee2256d41648a384491ff6ee6389375bbd04fb8d5fac08\""
Jan 24 00:35:06.803747 containerd[1990]: time="2026-01-24T00:35:06.803712398Z" level=info msg="StartContainer for \"89fe67618d04765979ee2256d41648a384491ff6ee6389375bbd04fb8d5fac08\""
Jan 24 00:35:06.838421 systemd[1]: Started cri-containerd-89fe67618d04765979ee2256d41648a384491ff6ee6389375bbd04fb8d5fac08.scope - libcontainer container 89fe67618d04765979ee2256d41648a384491ff6ee6389375bbd04fb8d5fac08.
Jan 24 00:35:06.885571 containerd[1990]: time="2026-01-24T00:35:06.885516494Z" level=info msg="StartContainer for \"89fe67618d04765979ee2256d41648a384491ff6ee6389375bbd04fb8d5fac08\" returns successfully"
Jan 24 00:35:07.608248 kubelet[3194]: E0124 00:35:07.608170 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-59955b8999-7njzz" podUID="8bd1b1e2-6c2f-496d-84df-3687f4a4a992"
Jan 24 00:35:11.604412 kubelet[3194]: E0124 00:35:11.604262 3194 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-777f8fb74-d8qgp" podUID="0c7963cb-5f76-453d-b9ca-f28ed3f17ce0"