Dec 16 13:13:37.881876 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:13:37.881912 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:13:37.881929 kernel: BIOS-provided physical RAM map:
Dec 16 13:13:37.881940 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:13:37.881950 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Dec 16 13:13:37.881960 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:13:37.881972 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:13:37.881983 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:13:37.881994 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:13:37.882005 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:13:37.883686 kernel: NX (Execute Disable) protection: active
Dec 16 13:13:37.883707 kernel: APIC: Static calls initialized
Dec 16 13:13:37.883718 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Dec 16 13:13:37.883731 kernel: extended physical RAM map:
Dec 16 13:13:37.883746 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 13:13:37.883758 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Dec 16 13:13:37.883774 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Dec 16 13:13:37.883786 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Dec 16 13:13:37.883799 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Dec 16 13:13:37.883812 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Dec 16 13:13:37.883824 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Dec 16 13:13:37.883837 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Dec 16 13:13:37.883849 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Dec 16 13:13:37.883862 kernel: efi: EFI v2.7 by EDK II
Dec 16 13:13:37.883874 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Dec 16 13:13:37.883887 kernel: secureboot: Secure boot disabled
Dec 16 13:13:37.883899 kernel: SMBIOS 2.7 present.
Dec 16 13:13:37.883914 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 16 13:13:37.883927 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:13:37.883939 kernel: Hypervisor detected: KVM
Dec 16 13:13:37.883952 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:13:37.883964 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:13:37.883977 kernel: kvm-clock: using sched offset of 5012444117 cycles
Dec 16 13:13:37.883990 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:13:37.884003 kernel: tsc: Detected 2499.998 MHz processor
Dec 16 13:13:37.884016 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:13:37.884029 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:13:37.884044 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Dec 16 13:13:37.884057 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 13:13:37.884071 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:13:37.884089 kernel: Using GB pages for direct mapping
Dec 16 13:13:37.884103 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:13:37.884116 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Dec 16 13:13:37.884130 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 13:13:37.884147 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 13:13:37.884161 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 13:13:37.884175 kernel: ACPI: FACS 0x00000000789D0000 000040
Dec 16 13:13:37.884188 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 16 13:13:37.884202 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 13:13:37.884216 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 13:13:37.884229 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 16 13:13:37.884243 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 16 13:13:37.884259 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:13:37.884273 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 16 13:13:37.884293 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Dec 16 13:13:37.884306 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Dec 16 13:13:37.884319 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Dec 16 13:13:37.884332 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Dec 16 13:13:37.884345 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Dec 16 13:13:37.884357 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Dec 16 13:13:37.884374 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Dec 16 13:13:37.884387 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Dec 16 13:13:37.884400 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Dec 16 13:13:37.884414 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Dec 16 13:13:37.884428 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Dec 16 13:13:37.884442 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Dec 16 13:13:37.884455 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 16 13:13:37.884469 kernel: NUMA: Initialized distance table, cnt=1
Dec 16 13:13:37.884482 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Dec 16 13:13:37.884498 kernel: Zone ranges:
Dec 16 13:13:37.884512 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:13:37.884525 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Dec 16 13:13:37.884539 kernel: Normal empty
Dec 16 13:13:37.884552 kernel: Device empty
Dec 16 13:13:37.884566 kernel: Movable zone start for each node
Dec 16 13:13:37.884579 kernel: Early memory node ranges
Dec 16 13:13:37.884592 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 13:13:37.884606 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Dec 16 13:13:37.884620 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Dec 16 13:13:37.885726 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Dec 16 13:13:37.885743 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:13:37.885757 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 13:13:37.885772 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 16 13:13:37.885786 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Dec 16 13:13:37.885801 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 16 13:13:37.885815 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:13:37.885827 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 16 13:13:37.885840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:13:37.885854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:13:37.885866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:13:37.885878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:13:37.885889 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:13:37.885902 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:13:37.885913 kernel: TSC deadline timer available
Dec 16 13:13:37.885926 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:13:37.885938 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:13:37.885949 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:13:37.885962 kernel: CPU topo: Max. threads per core: 2
Dec 16 13:13:37.885977 kernel: CPU topo: Num. cores per package: 1
Dec 16 13:13:37.885988 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:13:37.886000 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:13:37.886013 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:13:37.886026 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Dec 16 13:13:37.886039 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:13:37.886052 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:13:37.886064 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:13:37.886078 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:13:37.886096 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:13:37.886110 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:13:37.886124 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:13:37.886139 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:13:37.886154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:13:37.886167 kernel: random: crng init done
Dec 16 13:13:37.886182 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:13:37.886197 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 13:13:37.886215 kernel: Fallback order for Node 0: 0
Dec 16 13:13:37.886227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Dec 16 13:13:37.886240 kernel: Policy zone: DMA32
Dec 16 13:13:37.886263 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:13:37.886282 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:13:37.886296 kernel: Kernel/User page tables isolation: enabled
Dec 16 13:13:37.886310 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:13:37.886325 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:13:37.886340 kernel: Dynamic Preempt: voluntary
Dec 16 13:13:37.886356 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:13:37.886371 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:13:37.886385 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:13:37.886403 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:13:37.886419 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:13:37.886433 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:13:37.886448 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:13:37.886462 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:13:37.886504 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:13:37.886520 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:13:37.886536 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:13:37.886550 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:13:37.886566 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 13:13:37.886581 kernel: Console: colour dummy device 80x25
Dec 16 13:13:37.886596 kernel: printk: legacy console [tty0] enabled
Dec 16 13:13:37.886611 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:13:37.886739 kernel: ACPI: Core revision 20240827
Dec 16 13:13:37.886760 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 16 13:13:37.886775 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:13:37.886790 kernel: x2apic enabled
Dec 16 13:13:37.886805 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:13:37.886821 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 16 13:13:37.886837 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 16 13:13:37.886852 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 16 13:13:37.886867 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Dec 16 13:13:37.886882 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:13:37.886900 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:13:37.886915 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:13:37.886930 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 13:13:37.886945 kernel: RETBleed: Vulnerable
Dec 16 13:13:37.886959 kernel: Speculative Store Bypass: Vulnerable
Dec 16 13:13:37.886974 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:13:37.886989 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:13:37.887004 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 16 13:13:37.887019 kernel: active return thunk: its_return_thunk
Dec 16 13:13:37.887034 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 13:13:37.887049 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:13:37.887066 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:13:37.887079 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:13:37.887091 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 16 13:13:37.887104 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 16 13:13:37.887118 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 13:13:37.887132 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 13:13:37.887145 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 13:13:37.887159 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:13:37.887173 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:13:37.887186 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 16 13:13:37.887205 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 16 13:13:37.887218 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 16 13:13:37.887231 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 16 13:13:37.887245 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 16 13:13:37.887259 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 16 13:13:37.887273 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 16 13:13:37.887286 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:13:37.887300 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:13:37.887314 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:13:37.887327 kernel: landlock: Up and running.
Dec 16 13:13:37.887340 kernel: SELinux: Initializing.
Dec 16 13:13:37.887354 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:13:37.887371 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 16 13:13:37.887386 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 16 13:13:37.887400 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 16 13:13:37.887414 kernel: signal: max sigframe size: 3632
Dec 16 13:13:37.887428 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:13:37.887442 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:13:37.887456 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:13:37.887470 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 13:13:37.887484 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:13:37.887498 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:13:37.887515 kernel: .... node #0, CPUs: #1
Dec 16 13:13:37.887529 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 16 13:13:37.887545 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 16 13:13:37.887559 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:13:37.887573 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 16 13:13:37.887587 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133380K reserved, 0K cma-reserved)
Dec 16 13:13:37.887601 kernel: devtmpfs: initialized
Dec 16 13:13:37.887615 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:13:37.887644 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Dec 16 13:13:37.887658 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:13:37.887672 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:13:37.887686 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:13:37.887701 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:13:37.887715 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:13:37.887729 kernel: audit: type=2000 audit(1765890815.182:1): state=initialized audit_enabled=0 res=1
Dec 16 13:13:37.887743 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:13:37.887757 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:13:37.887775 kernel: cpuidle: using governor menu
Dec 16 13:13:37.887789 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:13:37.887803 kernel: dca service started, version 1.12.1
Dec 16 13:13:37.887817 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:13:37.887832 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:13:37.887846 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:13:37.887860 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:13:37.887874 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:13:37.887888 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:13:37.887905 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:13:37.887919 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:13:37.887932 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:13:37.887947 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 16 13:13:37.887961 kernel: ACPI: Interpreter enabled
Dec 16 13:13:37.887975 kernel: ACPI: PM: (supports S0 S5)
Dec 16 13:13:37.887988 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:13:37.888002 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:13:37.888016 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:13:37.888033 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 16 13:13:37.888047 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:13:37.890680 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:13:37.890864 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 16 13:13:37.891001 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 16 13:13:37.891022 kernel: acpiphp: Slot [3] registered
Dec 16 13:13:37.891038 kernel: acpiphp: Slot [4] registered
Dec 16 13:13:37.891059 kernel: acpiphp: Slot [5] registered
Dec 16 13:13:37.891075 kernel: acpiphp: Slot [6] registered
Dec 16 13:13:37.891091 kernel: acpiphp: Slot [7] registered
Dec 16 13:13:37.891107 kernel: acpiphp: Slot [8] registered
Dec 16 13:13:37.891123 kernel: acpiphp: Slot [9] registered
Dec 16 13:13:37.891139 kernel: acpiphp: Slot [10] registered
Dec 16 13:13:37.891156 kernel: acpiphp: Slot [11] registered
Dec 16 13:13:37.891195 kernel: acpiphp: Slot [12] registered
Dec 16 13:13:37.891211 kernel: acpiphp: Slot [13] registered
Dec 16 13:13:37.891228 kernel: acpiphp: Slot [14] registered
Dec 16 13:13:37.891243 kernel: acpiphp: Slot [15] registered
Dec 16 13:13:37.891258 kernel: acpiphp: Slot [16] registered
Dec 16 13:13:37.891274 kernel: acpiphp: Slot [17] registered
Dec 16 13:13:37.891289 kernel: acpiphp: Slot [18] registered
Dec 16 13:13:37.891305 kernel: acpiphp: Slot [19] registered
Dec 16 13:13:37.891321 kernel: acpiphp: Slot [20] registered
Dec 16 13:13:37.891337 kernel: acpiphp: Slot [21] registered
Dec 16 13:13:37.891352 kernel: acpiphp: Slot [22] registered
Dec 16 13:13:37.891368 kernel: acpiphp: Slot [23] registered
Dec 16 13:13:37.891387 kernel: acpiphp: Slot [24] registered
Dec 16 13:13:37.891403 kernel: acpiphp: Slot [25] registered
Dec 16 13:13:37.891419 kernel: acpiphp: Slot [26] registered
Dec 16 13:13:37.891433 kernel: acpiphp: Slot [27] registered
Dec 16 13:13:37.891446 kernel: acpiphp: Slot [28] registered
Dec 16 13:13:37.891461 kernel: acpiphp: Slot [29] registered
Dec 16 13:13:37.891476 kernel: acpiphp: Slot [30] registered
Dec 16 13:13:37.891490 kernel: acpiphp: Slot [31] registered
Dec 16 13:13:37.891505 kernel: PCI host bridge to bus 0000:00
Dec 16 13:13:37.891670 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:13:37.891789 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:13:37.891909 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:13:37.892026 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 16 13:13:37.892142 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:13:37.892253 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:13:37.892421 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:13:37.892564 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:13:37.894712 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Dec 16 13:13:37.894866 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 16 13:13:37.895007 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 16 13:13:37.895144 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 16 13:13:37.895281 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 16 13:13:37.895426 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 16 13:13:37.895562 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 16 13:13:37.895724 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 16 13:13:37.895867 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:13:37.896001 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Dec 16 13:13:37.896133 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 13:13:37.896263 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:13:37.896423 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Dec 16 13:13:37.896554 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Dec 16 13:13:37.898773 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Dec 16 13:13:37.898930 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Dec 16 13:13:37.898950 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:13:37.898965 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:13:37.898979 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:13:37.898998 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:13:37.899012 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 16 13:13:37.899026 kernel: iommu: Default domain type: Translated
Dec 16 13:13:37.899041 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:13:37.899055 kernel: efivars: Registered efivars operations
Dec 16 13:13:37.899069 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:13:37.899082 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:13:37.899095 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Dec 16 13:13:37.899116 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Dec 16 13:13:37.899133 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Dec 16 13:13:37.899280 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 16 13:13:37.899418 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 16 13:13:37.899557 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:13:37.899577 kernel: vgaarb: loaded
Dec 16 13:13:37.899594 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 16 13:13:37.899610 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 16 13:13:37.899655 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:13:37.899675 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:13:37.899691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:13:37.899707 kernel: pnp: PnP ACPI init
Dec 16 13:13:37.899723 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:13:37.899757 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:13:37.899773 kernel: NET: Registered PF_INET protocol family
Dec 16 13:13:37.899789 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:13:37.899805 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 16 13:13:37.899822 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:13:37.899841 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 13:13:37.899858 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 16 13:13:37.899874 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 16 13:13:37.899890 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:13:37.899906 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 16 13:13:37.899923 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:13:37.899938 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:13:37.900073 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:13:37.900197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:13:37.900335 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:13:37.900455 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 16 13:13:37.900573 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Dec 16 13:13:37.902840 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 16 13:13:37.902879 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:13:37.902896 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 16 13:13:37.902913 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 16 13:13:37.902930 kernel: clocksource: Switched to clocksource tsc
Dec 16 13:13:37.902952 kernel: Initialise system trusted keyrings
Dec 16 13:13:37.902970 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 16 13:13:37.902985 kernel: Key type asymmetric registered
Dec 16 13:13:37.903001 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:13:37.903017 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:13:37.903034 kernel: io scheduler mq-deadline registered
Dec 16 13:13:37.903050 kernel: io scheduler kyber registered
Dec 16 13:13:37.903066 kernel: io scheduler bfq registered
Dec 16 13:13:37.903083 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:13:37.903103 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:13:37.903119 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:13:37.903135 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:13:37.903151 kernel: i8042: Warning: Keylock active
Dec 16 13:13:37.903167 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:13:37.903184 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:13:37.903364 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 16 13:13:37.903508 kernel: rtc_cmos 00:00: registered as rtc0
Dec 16 13:13:37.905701 kernel: rtc_cmos 00:00: setting system clock to 2025-12-16T13:13:37 UTC (1765890817)
Dec 16 13:13:37.905878 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 16 13:13:37.905927 kernel: intel_pstate: CPU model not supported
Dec 16 13:13:37.905948 kernel: efifb: probing for efifb
Dec 16 13:13:37.905965 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Dec 16 13:13:37.905982 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Dec 16 13:13:37.905999 kernel: efifb: scrolling: redraw
Dec 16 13:13:37.906015 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 13:13:37.906032 kernel: Console: switching to colour frame buffer device 100x37
Dec 16 13:13:37.906054 kernel: fb0: EFI VGA frame buffer device
Dec 16 13:13:37.906071 kernel: pstore: Using crash dump compression: deflate
Dec 16 13:13:37.906088 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 13:13:37.906105 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:13:37.906120 kernel: Segment Routing with IPv6
Dec 16 13:13:37.906134 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:13:37.906149 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:13:37.906164 kernel: Key type dns_resolver registered
Dec 16 13:13:37.906180 kernel: IPI shorthand broadcast: enabled
Dec 16 13:13:37.906198 kernel: sched_clock: Marking stable (2567003289, 151895599)->(2808116802, -89217914)
Dec 16 13:13:37.906213 kernel: registered taskstats version 1
Dec 16 13:13:37.906233 kernel: Loading compiled-in X.509 certificates
Dec 16 13:13:37.906249 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:13:37.906264 kernel: Demotion targets for Node 0: null
Dec 16 13:13:37.906279 kernel: Key type .fscrypt registered
Dec 16 13:13:37.906294 kernel: Key type fscrypt-provisioning registered
Dec 16 13:13:37.906309 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:13:37.906325 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:13:37.906344 kernel: ima: No architecture policies found
Dec 16 13:13:37.906359 kernel: clk: Disabling unused clocks
Dec 16 13:13:37.906375 kernel: Warning: unable to open an initial console.
Dec 16 13:13:37.906391 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:13:37.906409 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:13:37.906430 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:13:37.906452 kernel: Run /init as init process
Dec 16 13:13:37.906470 kernel: with arguments:
Dec 16 13:13:37.906487 kernel: /init
Dec 16 13:13:37.906505 kernel: with environment:
Dec 16 13:13:37.906523 kernel: HOME=/
Dec 16 13:13:37.906540 kernel: TERM=linux
Dec 16 13:13:37.906559 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:13:37.906583 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:13:37.906605 systemd[1]: Detected virtualization amazon.
Dec 16 13:13:37.906652 systemd[1]: Detected architecture x86-64.
Dec 16 13:13:37.906683 systemd[1]: Running in initrd.
Dec 16 13:13:37.906698 systemd[1]: No hostname configured, using default hostname.
Dec 16 13:13:37.906713 systemd[1]: Hostname set to .
Dec 16 13:13:37.906727 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:13:37.906744 systemd[1]: Queued start job for default target initrd.target.
Dec 16 13:13:37.906767 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:37.906786 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:37.906807 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 13:13:37.906826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:13:37.906845 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 13:13:37.906865 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 13:13:37.906885 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 13:13:37.906906 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 13:13:37.906920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:37.906938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:37.906955 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:13:37.906971 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:13:37.906987 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:13:37.907004 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:13:37.907020 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:13:37.907040 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:13:37.907056 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:13:37.907072 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:13:37.907087 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:37.907104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:37.907122 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:37.907140 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:13:37.907158 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:13:37.907177 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:13:37.907200 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:13:37.907218 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:13:37.907237 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:13:37.907253 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:13:37.907270 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:13:37.907286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:37.907301 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 13:13:37.907358 systemd-journald[188]: Collecting audit messages is disabled. Dec 16 13:13:37.907399 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:13:37.907416 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 13:13:37.907432 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 13:13:37.907449 systemd-journald[188]: Journal started Dec 16 13:13:37.907483 systemd-journald[188]: Runtime Journal (/run/log/journal/ec257767104d1a40269e9e82fcab5481) is 4.7M, max 38.1M, 33.3M free. Dec 16 13:13:37.890671 systemd-modules-load[189]: Inserted module 'overlay' Dec 16 13:13:37.914831 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:13:37.919518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:13:37.926916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 13:13:37.933841 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:13:37.942704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 13:13:37.950657 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 13:13:37.952844 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:13:37.958291 kernel: Bridge firewalling registered Dec 16 13:13:37.957510 systemd-modules-load[189]: Inserted module 'br_netfilter' Dec 16 13:13:37.959320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 13:13:37.969382 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 13:13:37.971820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:13:37.976448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 13:13:37.981335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:13:37.989820 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 13:13:37.992213 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 13:13:37.997340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:13:38.004987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:13:38.009693 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Dec 16 13:13:38.020099 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 13:13:38.046261 systemd-resolved[230]: Positive Trust Anchors: Dec 16 13:13:38.046897 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:13:38.046936 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:13:38.052460 systemd-resolved[230]: Defaulting to hostname 'linux'. Dec 16 13:13:38.053509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:13:38.053992 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:13:38.115666 kernel: SCSI subsystem initialized Dec 16 13:13:38.125650 kernel: Loading iSCSI transport class v2.0-870. Dec 16 13:13:38.136671 kernel: iscsi: registered transport (tcp) Dec 16 13:13:38.159030 kernel: iscsi: registered transport (qla4xxx) Dec 16 13:13:38.159115 kernel: QLogic iSCSI HBA Driver Dec 16 13:13:38.179074 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:13:38.202204 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:13:38.204595 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:13:38.248885 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 13:13:38.251006 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 16 13:13:38.307660 kernel: raid6: avx512x4 gen() 18083 MB/s Dec 16 13:13:38.325664 kernel: raid6: avx512x2 gen() 17937 MB/s Dec 16 13:13:38.343657 kernel: raid6: avx512x1 gen() 17800 MB/s Dec 16 13:13:38.361670 kernel: raid6: avx2x4 gen() 18002 MB/s Dec 16 13:13:38.379655 kernel: raid6: avx2x2 gen() 17978 MB/s Dec 16 13:13:38.397897 kernel: raid6: avx2x1 gen() 13914 MB/s Dec 16 13:13:38.397957 kernel: raid6: using algorithm avx512x4 gen() 18083 MB/s Dec 16 13:13:38.416863 kernel: raid6: .... xor() 7472 MB/s, rmw enabled Dec 16 13:13:38.416937 kernel: raid6: using avx512x2 recovery algorithm Dec 16 13:13:38.437664 kernel: xor: automatically using best checksumming function avx Dec 16 13:13:38.607662 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 13:13:38.614387 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 13:13:38.616672 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:13:38.643888 systemd-udevd[439]: Using default interface naming scheme 'v255'. Dec 16 13:13:38.650711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:13:38.655892 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 13:13:38.680859 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Dec 16 13:13:38.707935 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 13:13:38.710194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 13:13:38.772553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:13:38.777376 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 13:13:38.863659 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Dec 16 13:13:38.868648 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 13:13:38.874778 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 13:13:38.875036 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 13:13:38.886656 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 16 13:13:38.897305 kernel: AES CTR mode by8 optimization enabled Dec 16 13:13:38.902686 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2f:79:43:9d:ed Dec 16 13:13:38.917970 (udev-worker)[485]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:13:38.930114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:13:38.930291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:13:38.933154 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:38.936946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:38.940261 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:13:38.946653 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 16 13:13:38.949647 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 16 13:13:38.965028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:13:38.965166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:13:38.968803 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 13:13:38.969175 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:13:38.970485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:38.976870 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Dec 16 13:13:38.976939 kernel: GPT:9289727 != 33554431 Dec 16 13:13:38.976958 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 13:13:38.976976 kernel: GPT:9289727 != 33554431 Dec 16 13:13:38.976992 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 13:13:38.977009 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:13:39.002748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:13:39.014667 kernel: nvme nvme0: using unchecked data buffer Dec 16 13:13:39.090160 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 16 13:13:39.119515 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 16 13:13:39.121484 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 16 13:13:39.125783 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 13:13:39.151331 disk-uuid[668]: Primary Header is updated. Dec 16 13:13:39.151331 disk-uuid[668]: Secondary Entries is updated. Dec 16 13:13:39.151331 disk-uuid[668]: Secondary Header is updated. Dec 16 13:13:39.161548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 13:13:39.162561 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 13:13:39.166551 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:13:39.184039 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 13:13:39.186050 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:13:39.185250 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:13:39.186705 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:13:39.190608 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Dec 16 13:13:39.215379 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 13:13:39.445094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 13:13:40.192706 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 13:13:40.192765 disk-uuid[670]: The operation has completed successfully. Dec 16 13:13:40.313738 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 13:13:40.313888 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 13:13:40.344928 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 13:13:40.366456 sh[940]: Success Dec 16 13:13:40.393864 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 13:13:40.393944 kernel: device-mapper: uevent: version 1.0.3 Dec 16 13:13:40.398653 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 13:13:40.407654 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Dec 16 13:13:40.520764 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 13:13:40.524727 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 13:13:40.541228 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 13:13:40.560654 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (963) Dec 16 13:13:40.565399 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 13:13:40.565474 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:13:40.594299 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 13:13:40.594375 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 13:13:40.594389 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 13:13:40.610324 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 13:13:40.611619 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 13:13:40.612677 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 13:13:40.613769 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 13:13:40.617796 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 13:13:40.655664 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (997) Dec 16 13:13:40.661877 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:13:40.661960 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:13:40.673187 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:13:40.673265 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:13:40.680743 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:13:40.681451 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 16 13:13:40.684053 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 13:13:40.725059 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 13:13:40.727615 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 13:13:40.796061 systemd-networkd[1132]: lo: Link UP Dec 16 13:13:40.796833 systemd-networkd[1132]: lo: Gained carrier Dec 16 13:13:40.798086 systemd-networkd[1132]: Enumeration completed Dec 16 13:13:40.798406 systemd-networkd[1132]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:13:40.798410 systemd-networkd[1132]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 13:13:40.799110 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:13:40.800527 systemd[1]: Reached target network.target - Network. Dec 16 13:13:40.804014 systemd-networkd[1132]: eth0: Link UP Dec 16 13:13:40.804028 systemd-networkd[1132]: eth0: Gained carrier Dec 16 13:13:40.804042 systemd-networkd[1132]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:13:40.821878 systemd-networkd[1132]: eth0: DHCPv4 address 172.31.28.249/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 13:13:40.870234 ignition[1081]: Ignition 2.22.0 Dec 16 13:13:40.870668 ignition[1081]: Stage: fetch-offline Dec 16 13:13:40.870859 ignition[1081]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:13:40.873437 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 13:13:40.870867 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:13:40.871453 ignition[1081]: Ignition finished successfully Dec 16 13:13:40.875543 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 13:13:40.903704 ignition[1142]: Ignition 2.22.0 Dec 16 13:13:40.903717 ignition[1142]: Stage: fetch Dec 16 13:13:40.903993 ignition[1142]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:13:40.904002 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:13:40.904077 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:13:40.912257 ignition[1142]: PUT result: OK Dec 16 13:13:40.913764 ignition[1142]: parsed url from cmdline: "" Dec 16 13:13:40.913774 ignition[1142]: no config URL provided Dec 16 13:13:40.913782 ignition[1142]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 13:13:40.913792 ignition[1142]: no config at "/usr/lib/ignition/user.ign" Dec 16 13:13:40.913807 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:13:40.914405 ignition[1142]: PUT result: OK Dec 16 13:13:40.914446 ignition[1142]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 16 13:13:40.915043 ignition[1142]: GET result: OK Dec 16 13:13:40.915112 ignition[1142]: parsing config with SHA512: f0f2cff80cf96b98eb866177da6fe1e74045bcc11d2dc4fe24a3699ddb1621637275af2a693e011fa8adc2472a02b1afc237a7393221429ad2925615fa08bd68 Dec 16 13:13:40.918688 unknown[1142]: fetched base config from "system" Dec 16 13:13:40.918697 unknown[1142]: fetched base config from "system" Dec 16 13:13:40.918988 ignition[1142]: fetch: fetch complete Dec 16 13:13:40.918702 unknown[1142]: fetched user config from "aws" Dec 16 13:13:40.918992 ignition[1142]: fetch: fetch passed Dec 16 13:13:40.919034 ignition[1142]: Ignition finished successfully Dec 16 13:13:40.921295 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 13:13:40.922591 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 16 13:13:40.955479 ignition[1148]: Ignition 2.22.0 Dec 16 13:13:40.955497 ignition[1148]: Stage: kargs Dec 16 13:13:40.955903 ignition[1148]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:13:40.955916 ignition[1148]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:13:40.956033 ignition[1148]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:13:40.957086 ignition[1148]: PUT result: OK Dec 16 13:13:40.959412 ignition[1148]: kargs: kargs passed Dec 16 13:13:40.959486 ignition[1148]: Ignition finished successfully Dec 16 13:13:40.961754 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 13:13:40.963220 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 13:13:40.993824 ignition[1154]: Ignition 2.22.0 Dec 16 13:13:40.993840 ignition[1154]: Stage: disks Dec 16 13:13:40.994208 ignition[1154]: no configs at "/usr/lib/ignition/base.d" Dec 16 13:13:40.994220 ignition[1154]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:13:40.994344 ignition[1154]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:13:40.995214 ignition[1154]: PUT result: OK Dec 16 13:13:40.997585 ignition[1154]: disks: disks passed Dec 16 13:13:40.997675 ignition[1154]: Ignition finished successfully Dec 16 13:13:40.999652 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 13:13:41.000252 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 13:13:41.000729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 13:13:41.001256 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:13:41.001834 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:13:41.002381 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:13:41.004069 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 16 13:13:41.054599 systemd-fsck[1163]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 13:13:41.057936 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 13:13:41.060771 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 13:13:41.227692 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 13:13:41.227692 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 13:13:41.230467 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 13:13:41.233754 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:13:41.236729 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 13:13:41.240791 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 13:13:41.241808 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 13:13:41.241850 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 13:13:41.250899 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 13:13:41.252964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 13:13:41.269732 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1182) Dec 16 13:13:41.274502 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:13:41.274572 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:13:41.282615 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:13:41.282701 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:13:41.285055 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 13:13:41.354959 initrd-setup-root[1208]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 13:13:41.361460 initrd-setup-root[1215]: cut: /sysroot/etc/group: No such file or directory Dec 16 13:13:41.366525 initrd-setup-root[1222]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 13:13:41.370540 initrd-setup-root[1229]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 13:13:41.478594 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 13:13:41.480395 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 13:13:41.481993 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 13:13:41.501677 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:13:41.530637 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 16 13:13:41.531281 ignition[1296]: INFO : Ignition 2.22.0 Dec 16 13:13:41.531281 ignition[1296]: INFO : Stage: mount Dec 16 13:13:41.532760 ignition[1296]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:13:41.532760 ignition[1296]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 13:13:41.532760 ignition[1296]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 13:13:41.533863 ignition[1296]: INFO : PUT result: OK Dec 16 13:13:41.535560 ignition[1296]: INFO : mount: mount passed Dec 16 13:13:41.535912 ignition[1296]: INFO : Ignition finished successfully Dec 16 13:13:41.537188 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 13:13:41.538895 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 13:13:41.559399 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 13:13:41.561473 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 13:13:41.601650 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1309) Dec 16 13:13:41.609735 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 13:13:41.609814 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 13:13:41.618209 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 13:13:41.618287 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 13:13:41.620214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 13:13:41.656618 ignition[1326]: INFO : Ignition 2.22.0
Dec 16 13:13:41.656618 ignition[1326]: INFO : Stage: files
Dec 16 13:13:41.658179 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:41.658179 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:41.658179 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:41.658179 ignition[1326]: INFO : PUT result: OK
Dec 16 13:13:41.660788 ignition[1326]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 13:13:41.662561 ignition[1326]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 13:13:41.662561 ignition[1326]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 13:13:41.667081 ignition[1326]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 13:13:41.668045 ignition[1326]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 13:13:41.669111 ignition[1326]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 13:13:41.668587 unknown[1326]: wrote ssh authorized keys file for user: core
Dec 16 13:13:41.671243 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:13:41.672105 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 13:13:41.771666 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 13:13:41.892107 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 13:13:41.892107 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 13:13:41.894610 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 13:13:41.894610 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:13:41.894610 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 13:13:41.894610 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:13:41.894610 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 13:13:41.894610 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:13:41.894610 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 13:13:41.900557 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:13:41.900557 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 13:13:41.900557 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:41.903777 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:41.905352 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:41.905352 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Dec 16 13:13:42.328983 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 13:13:42.628781 systemd-networkd[1132]: eth0: Gained IPv6LL
Dec 16 13:13:42.725256 ignition[1326]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Dec 16 13:13:42.725256 ignition[1326]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 13:13:42.727130 ignition[1326]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:13:42.731637 ignition[1326]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 13:13:42.731637 ignition[1326]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 13:13:42.733489 ignition[1326]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 13:13:42.733489 ignition[1326]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 13:13:42.733489 ignition[1326]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:13:42.733489 ignition[1326]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 13:13:42.733489 ignition[1326]: INFO : files: files passed
Dec 16 13:13:42.733489 ignition[1326]: INFO : Ignition finished successfully
Dec 16 13:13:42.733999 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 13:13:42.735470 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 13:13:42.739111 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 13:13:42.747510 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 13:13:42.748171 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 13:13:42.757274 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:42.757274 initrd-setup-root-after-ignition[1355]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:42.759767 initrd-setup-root-after-ignition[1359]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 13:13:42.762179 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:13:42.762793 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 13:13:42.764475 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 13:13:42.813972 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 13:13:42.814096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 13:13:42.815198 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 13:13:42.816142 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 13:13:42.817028 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:13:42.817889 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:13:42.844122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:13:42.846495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:13:42.868744 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:13:42.869513 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:42.870600 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:13:42.871483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:13:42.871743 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:13:42.872988 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:13:42.873869 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:13:42.874590 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:13:42.875436 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:13:42.876203 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:13:42.877121 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:13:42.877907 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:13:42.878638 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:13:42.879469 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:13:42.880758 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:13:42.881540 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:13:42.882278 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:13:42.882502 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:13:42.883537 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:42.884448 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:42.885116 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:13:42.885448 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:42.885985 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:13:42.886195 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:13:42.887518 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:13:42.887782 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:13:42.888575 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:13:42.888743 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:13:42.891745 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:13:42.892409 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:13:42.892653 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:42.895565 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:13:42.896093 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:13:42.896273 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:42.899667 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:13:42.899840 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:13:42.912022 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:13:42.912176 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:13:42.931116 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:13:42.936483 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:13:42.936619 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:13:42.939347 ignition[1379]: INFO : Ignition 2.22.0
Dec 16 13:13:42.939347 ignition[1379]: INFO : Stage: umount
Dec 16 13:13:42.939347 ignition[1379]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:13:42.939347 ignition[1379]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 13:13:42.939347 ignition[1379]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 13:13:42.942096 ignition[1379]: INFO : PUT result: OK
Dec 16 13:13:42.943334 ignition[1379]: INFO : umount: umount passed
Dec 16 13:13:42.943839 ignition[1379]: INFO : Ignition finished successfully
Dec 16 13:13:42.945074 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:13:42.945206 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:13:42.946420 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:13:42.946543 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:13:42.947002 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:13:42.947064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:13:42.947668 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:13:42.947726 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:13:42.948286 systemd[1]: Stopped target network.target - Network.
Dec 16 13:13:42.949009 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:13:42.949073 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:13:42.949685 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:13:42.950247 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:13:42.953685 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:42.954047 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:13:42.954967 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:13:42.955584 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:13:42.955669 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:13:42.956241 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:13:42.956292 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:13:42.956933 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:13:42.957008 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:13:42.957579 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:13:42.957660 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:13:42.958218 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:13:42.958282 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:13:42.959040 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:13:42.959664 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:13:42.963652 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:13:42.963897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:13:42.967223 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:13:42.969134 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:13:42.969216 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:13:42.971969 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:13:42.972288 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:13:42.972572 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:13:42.974660 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:13:42.975757 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:13:42.976180 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:13:42.976233 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:42.978004 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:13:42.980139 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:13:42.980209 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:13:42.980928 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:13:42.980984 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:42.983771 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:13:42.983830 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:42.984547 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:13:42.987370 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:13:43.002285 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:13:43.002450 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:13:43.003786 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:13:43.003983 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:13:43.005230 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:13:43.005314 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:43.006548 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:13:43.006598 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:43.007251 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:13:43.007313 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:13:43.008708 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:13:43.008768 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:13:43.009834 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:13:43.009903 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:13:43.011972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:13:43.013192 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:13:43.013267 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:43.015865 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:13:43.015934 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:13:43.017515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:13:43.017582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:13:43.028581 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:13:43.028728 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:13:43.030362 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:13:43.031955 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:13:43.059899 systemd[1]: Switching root.
Dec 16 13:13:43.094275 systemd-journald[188]: Journal stopped
Dec 16 13:13:44.512932 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:13:44.512996 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:13:44.513013 kernel: SELinux: policy capability open_perms=1
Dec 16 13:13:44.513028 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:13:44.513040 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:13:44.513052 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:13:44.513067 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:13:44.513079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:13:44.513093 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:13:44.513105 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:13:44.513117 kernel: audit: type=1403 audit(1765890823.465:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:13:44.513130 systemd[1]: Successfully loaded SELinux policy in 65.031ms.
Dec 16 13:13:44.513152 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.475ms.
Dec 16 13:13:44.513169 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:13:44.513187 systemd[1]: Detected virtualization amazon.
Dec 16 13:13:44.513199 systemd[1]: Detected architecture x86-64.
Dec 16 13:13:44.513213 systemd[1]: Detected first boot.
Dec 16 13:13:44.513226 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 13:13:44.513238 zram_generator::config[1424]: No configuration found.
Dec 16 13:13:44.513253 kernel: Guest personality initialized and is inactive
Dec 16 13:13:44.513264 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 16 13:13:44.513277 kernel: Initialized host personality
Dec 16 13:13:44.513288 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 13:13:44.513300 systemd[1]: Populated /etc with preset unit settings.
Dec 16 13:13:44.513316 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 13:13:44.513331 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 13:13:44.513343 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 13:13:44.513355 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:13:44.513367 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 13:13:44.513379 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 13:13:44.513392 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 13:13:44.513404 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 13:13:44.513416 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 13:13:44.513432 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 13:13:44.513444 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 13:13:44.513457 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 13:13:44.513470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:13:44.513483 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:13:44.513495 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 13:13:44.513507 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 13:13:44.513520 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 13:13:44.513534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 13:13:44.513546 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 13:13:44.513558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:13:44.513571 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:13:44.513583 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 13:13:44.513595 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 13:13:44.513608 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:13:44.518437 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 13:13:44.518486 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:13:44.518506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:13:44.518519 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 13:13:44.518532 systemd[1]: Reached target swap.target - Swaps.
Dec 16 13:13:44.518543 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 13:13:44.518556 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 13:13:44.518568 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 13:13:44.518581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:13:44.518593 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:13:44.518605 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:13:44.518620 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 13:13:44.518661 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 13:13:44.518674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 13:13:44.518688 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 13:13:44.518701 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:44.518714 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 13:13:44.518726 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 13:13:44.518738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 13:13:44.518752 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 13:13:44.518767 systemd[1]: Reached target machines.target - Containers.
Dec 16 13:13:44.518780 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 13:13:44.518792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:13:44.518804 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:13:44.518817 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 13:13:44.518831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:13:44.518843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:13:44.518856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:13:44.518871 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 13:13:44.518883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:13:44.518896 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 13:13:44.518908 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 13:13:44.518920 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 13:13:44.518932 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 13:13:44.518944 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 13:13:44.518957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:13:44.518972 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:13:44.518985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:13:44.518998 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:13:44.519010 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 13:13:44.519023 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 13:13:44.519035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:13:44.519049 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 13:13:44.519063 systemd[1]: Stopped verity-setup.service.
Dec 16 13:13:44.519077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:13:44.519090 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 13:13:44.519103 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 13:13:44.519117 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 13:13:44.519130 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 13:13:44.519142 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 13:13:44.519192 systemd-journald[1507]: Collecting audit messages is disabled.
Dec 16 13:13:44.519220 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 13:13:44.519233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:13:44.519245 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 13:13:44.519261 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 13:13:44.519274 systemd-journald[1507]: Journal started
Dec 16 13:13:44.519300 systemd-journald[1507]: Runtime Journal (/run/log/journal/ec257767104d1a40269e9e82fcab5481) is 4.7M, max 38.1M, 33.3M free.
Dec 16 13:13:44.233497 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 13:13:44.523203 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:13:44.242883 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 13:13:44.243403 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 13:13:44.523035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:13:44.523697 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:13:44.524488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:13:44.524935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:13:44.526671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:13:44.527339 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 13:13:44.530821 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:13:44.531529 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 13:13:44.550075 kernel: fuse: init (API version 7.41)
Dec 16 13:13:44.548458 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 13:13:44.551923 kernel: loop: module loaded
Dec 16 13:13:44.550726 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 13:13:44.554203 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:13:44.554948 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:13:44.566445 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:13:44.571772 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 13:13:44.577754 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 13:13:44.578505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 13:13:44.578556 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:13:44.582547 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 13:13:44.592839 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 13:13:44.593947 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:13:44.597256 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 13:13:44.602897 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 13:13:44.604380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:13:44.607907 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 13:13:44.609762 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:13:44.617858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:13:44.622902 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 13:13:44.627188 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 13:13:44.628044 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 13:13:44.636492 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 13:13:44.639869 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 13:13:44.692056 systemd-journald[1507]: Time spent on flushing to /var/log/journal/ec257767104d1a40269e9e82fcab5481 is 46.414ms for 1014 entries.
Dec 16 13:13:44.692056 systemd-journald[1507]: System Journal (/var/log/journal/ec257767104d1a40269e9e82fcab5481) is 8M, max 195.6M, 187.6M free.
Dec 16 13:13:44.779926 systemd-journald[1507]: Received client request to flush runtime journal.
Dec 16 13:13:44.779995 kernel: loop0: detected capacity change from 0 to 219144
Dec 16 13:13:44.780021 kernel: ACPI: bus type drm_connector registered
Dec 16 13:13:44.699253 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 13:13:44.702606 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 13:13:44.718026 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 13:13:44.720672 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:13:44.720912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:13:44.755749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:13:44.762425 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:13:44.782187 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 13:13:44.805079 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 13:13:44.821678 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 13:13:44.850169 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 13:13:44.855429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:13:44.867000 kernel: loop1: detected capacity change from 0 to 110984
Dec 16 13:13:44.908739 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Dec 16 13:13:44.909139 systemd-tmpfiles[1575]: ACLs are not supported, ignoring.
Dec 16 13:13:44.919664 kernel: loop2: detected capacity change from 0 to 72368
Dec 16 13:13:44.920429 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:13:45.065665 kernel: loop3: detected capacity change from 0 to 128560
Dec 16 13:13:45.143664 kernel: loop4: detected capacity change from 0 to 219144
Dec 16 13:13:45.186692 kernel: loop5: detected capacity change from 0 to 110984
Dec 16 13:13:45.225665 kernel: loop6: detected capacity change from 0 to 72368
Dec 16 13:13:45.245464 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 13:13:45.271676 kernel: loop7: detected capacity change from 0 to 128560
Dec 16 13:13:45.320990 (sd-merge)[1582]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 16 13:13:45.321878 (sd-merge)[1582]: Merged extensions into '/usr'.
Dec 16 13:13:45.333301 systemd[1]: Reload requested from client PID 1556 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:13:45.333467 systemd[1]: Reloading... Dec 16 13:13:45.488673 zram_generator::config[1611]: No configuration found. Dec 16 13:13:45.613648 ldconfig[1551]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:13:45.807426 systemd[1]: Reloading finished in 473 ms. Dec 16 13:13:45.828456 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:13:45.829324 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:13:45.830080 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:13:45.845350 systemd[1]: Starting ensure-sysext.service... Dec 16 13:13:45.847110 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:13:45.851196 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:13:45.864900 systemd[1]: Reload requested from client PID 1661 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:13:45.864920 systemd[1]: Reloading... Dec 16 13:13:45.886252 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 13:13:45.887135 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 13:13:45.887579 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 13:13:45.887927 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 13:13:45.889010 systemd-tmpfiles[1662]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 16 13:13:45.889377 systemd-tmpfiles[1662]: ACLs are not supported, ignoring. Dec 16 13:13:45.889484 systemd-tmpfiles[1662]: ACLs are not supported, ignoring. Dec 16 13:13:45.896110 systemd-tmpfiles[1662]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:13:45.896127 systemd-tmpfiles[1662]: Skipping /boot Dec 16 13:13:45.896715 systemd-udevd[1663]: Using default interface naming scheme 'v255'. Dec 16 13:13:45.916316 systemd-tmpfiles[1662]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 13:13:45.916422 systemd-tmpfiles[1662]: Skipping /boot Dec 16 13:13:45.942651 zram_generator::config[1686]: No configuration found. Dec 16 13:13:46.066159 (udev-worker)[1706]: Network interface NamePolicy= disabled on kernel command line. Dec 16 13:13:46.199645 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 13:13:46.252646 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 13:13:46.258547 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 13:13:46.258963 systemd[1]: Reloading finished in 393 ms. Dec 16 13:13:46.269061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 13:13:46.270927 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 13:13:46.281654 kernel: ACPI: button: Power Button [PWRF] Dec 16 13:13:46.286677 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Dec 16 13:13:46.297763 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 13:13:46.299974 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 13:13:46.302808 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 13:13:46.305048 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 16 13:13:46.312835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 13:13:46.314264 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 13:13:46.326426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:46.326672 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:13:46.328898 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:13:46.336993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:13:46.348963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:13:46.349748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:13:46.349930 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:13:46.350079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:46.363713 kernel: ACPI: button: Sleep Button [SLPF] Dec 16 13:13:46.358213 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 13:13:46.372718 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 13:13:46.375520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:13:46.377957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:13:46.386566 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 13:13:46.387140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:13:46.387951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:13:46.388226 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:13:46.388569 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:46.401513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:46.402071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:13:46.407067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 13:13:46.411281 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:13:46.412643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:13:46.413173 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:13:46.413454 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 13:13:46.415222 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:13:46.417321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 16 13:13:46.417647 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:13:46.421601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:13:46.423355 systemd[1]: Finished ensure-sysext.service. Dec 16 13:13:46.441545 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:13:46.441869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:13:46.457596 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 13:13:46.464870 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 13:13:46.521346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:13:46.521662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:13:46.523097 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:13:46.523914 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:13:46.525914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:13:46.540668 augenrules[1900]: No rules Dec 16 13:13:46.539828 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 13:13:46.541916 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 13:13:46.543258 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 13:13:46.544531 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 13:13:46.548649 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 16 13:13:46.560501 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 13:13:46.583034 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 13:13:46.664935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:46.701165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 13:13:46.708620 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 13:13:46.716593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 13:13:46.717885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 13:13:46.725214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 13:13:46.739405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 13:13:46.762790 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 13:13:46.861254 systemd-networkd[1850]: lo: Link UP Dec 16 13:13:46.862686 systemd-networkd[1850]: lo: Gained carrier Dec 16 13:13:46.867804 systemd-networkd[1850]: Enumeration completed Dec 16 13:13:46.867958 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 13:13:46.869147 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:13:46.869153 systemd-networkd[1850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 13:13:46.873434 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 13:13:46.875531 systemd-resolved[1854]: Positive Trust Anchors: Dec 16 13:13:46.875544 systemd-resolved[1854]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 13:13:46.875602 systemd-resolved[1854]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 13:13:46.879429 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 13:13:46.882842 systemd-resolved[1854]: Defaulting to hostname 'linux'. Dec 16 13:13:46.884558 systemd-networkd[1850]: eth0: Link UP Dec 16 13:13:46.884863 systemd-networkd[1850]: eth0: Gained carrier Dec 16 13:13:46.884896 systemd-networkd[1850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 13:13:46.900707 systemd-networkd[1850]: eth0: DHCPv4 address 172.31.28.249/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 13:13:46.901800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 13:13:46.902500 systemd[1]: Reached target network.target - Network. Dec 16 13:13:46.903099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 13:13:46.929411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 16 13:13:46.930321 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 13:13:46.931080 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 13:13:46.931745 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 13:13:46.932323 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 13:13:46.933093 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 13:13:46.934002 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 13:13:46.934572 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 13:13:46.935084 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 13:13:46.935128 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:13:46.935682 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:13:46.937912 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 13:13:46.941488 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 13:13:46.947249 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 13:13:46.948103 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 13:13:46.948837 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 13:13:46.953298 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 13:13:46.955278 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 13:13:46.957083 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 13:13:46.957709 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 13:13:46.959455 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 13:13:46.960041 systemd[1]: Reached target basic.target - Basic System. Dec 16 13:13:46.960527 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:13:46.960566 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 13:13:46.961953 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 13:13:46.964263 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 13:13:46.967821 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 13:13:46.973908 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 13:13:46.977059 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 13:13:46.981871 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 13:13:46.982753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 13:13:46.984920 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 13:13:46.990881 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 13:13:46.997878 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 13:13:47.001240 jq[1948]: false Dec 16 13:13:47.001219 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 13:13:47.032960 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 13:13:47.036544 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 16 13:13:47.040825 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 13:13:47.059061 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 13:13:47.062018 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 13:13:47.062782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 13:13:47.064907 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 13:13:47.072320 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 13:13:47.074187 extend-filesystems[1949]: Found /dev/nvme0n1p6 Dec 16 13:13:47.083762 oslogin_cache_refresh[1950]: Refreshing passwd entry cache Dec 16 13:13:47.088046 google_oslogin_nss_cache[1950]: oslogin_cache_refresh[1950]: Refreshing passwd entry cache Dec 16 13:13:47.093281 google_oslogin_nss_cache[1950]: oslogin_cache_refresh[1950]: Failure getting users, quitting Dec 16 13:13:47.093281 google_oslogin_nss_cache[1950]: oslogin_cache_refresh[1950]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:13:47.093281 google_oslogin_nss_cache[1950]: oslogin_cache_refresh[1950]: Refreshing group entry cache Dec 16 13:13:47.091820 oslogin_cache_refresh[1950]: Failure getting users, quitting Dec 16 13:13:47.091843 oslogin_cache_refresh[1950]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 13:13:47.091895 oslogin_cache_refresh[1950]: Refreshing group entry cache Dec 16 13:13:47.095953 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Dec 16 13:13:47.097116 oslogin_cache_refresh[1950]: Failure getting groups, quitting Dec 16 13:13:47.098320 google_oslogin_nss_cache[1950]: oslogin_cache_refresh[1950]: Failure getting groups, quitting Dec 16 13:13:47.098320 google_oslogin_nss_cache[1950]: oslogin_cache_refresh[1950]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:13:47.097131 oslogin_cache_refresh[1950]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 13:13:47.098837 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 13:13:47.099138 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 13:13:47.099578 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 13:13:47.100404 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 13:13:47.106539 extend-filesystems[1949]: Found /dev/nvme0n1p9 Dec 16 13:13:47.139096 extend-filesystems[1949]: Checking size of /dev/nvme0n1p9 Dec 16 13:13:47.151099 update_engine[1963]: I20251216 13:13:47.148921 1963 main.cc:92] Flatcar Update Engine starting Dec 16 13:13:47.164107 (ntainerd)[1981]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 13:13:47.173615 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 13:13:47.176882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 13:13:47.196732 jq[1964]: true Dec 16 13:13:47.198805 extend-filesystems[1949]: Resized partition /dev/nvme0n1p9 Dec 16 13:13:47.226176 extend-filesystems[2000]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 13:13:47.234659 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 16 13:13:47.259132 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 13:13:47.259729 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Dec 16 13:13:47.265194 ntpd[1952]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:47.265276 ntpd[1952]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: ---------------------------------------------------- Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: corporation. Support and training for ntp-4 are Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: available at https://www.nwtime.org/support Dec 16 13:13:47.265561 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: ---------------------------------------------------- Dec 16 13:13:47.265288 ntpd[1952]: ---------------------------------------------------- Dec 16 13:13:47.265726 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 13:13:47.265299 ntpd[1952]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:13:47.265308 ntpd[1952]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:47.265317 ntpd[1952]: corporation. Support and training for ntp-4 are
Dec 16 13:13:47.265327 ntpd[1952]: available at https://www.nwtime.org/support Dec 16 13:13:47.265336 ntpd[1952]: ---------------------------------------------------- Dec 16 13:13:47.283724 ntpd[1952]: proto: precision = 0.068 usec (-24) Dec 16 13:13:47.284775 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: proto: precision = 0.068 usec (-24) Dec 16 13:13:47.287263 ntpd[1952]: basedate set to 2025-11-30 Dec 16 13:13:47.287294 ntpd[1952]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:47.287458 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: basedate set to 2025-11-30 Dec 16 13:13:47.287458 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:47.287458 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:47.287432 ntpd[1952]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:47.287682 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:47.287463 ntpd[1952]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:47.287875 ntpd[1952]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:13:47.287917 ntpd[1952]: Listen normally on 3 eth0 172.31.28.249:123 Dec 16 13:13:47.287993 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:13:47.287993 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: Listen normally on 3 eth0 172.31.28.249:123 Dec 16 13:13:47.287993 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:47.287993 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: bind(21) AF_INET6 [fe80::42f:79ff:fe43:9ded%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:13:47.287950 ntpd[1952]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:47.288180 ntpd[1952]: 16 Dec 13:13:47 ntpd[1952]: unable to create socket on eth0 (5) for [fe80::42f:79ff:fe43:9ded%2]:123 Dec 16 13:13:47.287981 ntpd[1952]: bind(21) AF_INET6 [fe80::42f:79ff:fe43:9ded%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 13:13:47.288003 ntpd[1952]: unable to create socket on eth0 (5) for [fe80::42f:79ff:fe43:9ded%2]:123 Dec 16 13:13:47.289252 kernel: ntpd[1952]: segfault at 24 ip 00005642ec86faeb sp 00007ffcbbf99d10 error 4 in ntpd[68aeb,5642ec80d000+80000] likely on CPU 0 (core 0, socket 0) Dec 16 13:13:47.292110 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 16 13:13:47.293231 tar[1972]: linux-amd64/LICENSE Dec 16 13:13:47.293731 tar[1972]: linux-amd64/helm Dec 16 13:13:47.304046 dbus-daemon[1946]: [system] SELinux support is enabled Dec 16 13:13:47.304276 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 13:13:47.312568 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 13:13:47.312610 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 13:13:47.314942 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 13:13:47.314971 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:13:47.318369 dbus-daemon[1946]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1850 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 13:13:47.319367 systemd-logind[1962]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 13:13:47.319404 systemd-logind[1962]: Watching system buttons on /dev/input/event3 (Sleep Button) Dec 16 13:13:47.319427 systemd-logind[1962]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 13:13:47.323392 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 13:13:47.327681 update_engine[1963]: I20251216 13:13:47.327201 1963 update_check_scheduler.cc:74] Next update check in 11m49s Dec 16 13:13:47.327787 jq[1999]: true Dec 16 13:13:47.335896 systemd-logind[1962]: New seat seat0. Dec 16 13:13:47.345481 systemd-coredump[2012]: Process 1952 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 16 13:13:47.347238 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 13:13:47.350519 systemd[1]: Started update-engine.service - Update Engine. Dec 16 13:13:47.371297 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 16 13:13:47.377481 systemd[1]: Started systemd-coredump@0-2012-0.service - Process Core Dump (PID 2012/UID 0). Dec 16 13:13:47.404409 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 13:13:47.460167 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 16 13:13:47.489651 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 16 13:13:47.520620 extend-filesystems[2000]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 13:13:47.520620 extend-filesystems[2000]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 13:13:47.520620 extend-filesystems[2000]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 16 13:13:47.523763 extend-filesystems[1949]: Resized filesystem in /dev/nvme0n1p9 Dec 16 13:13:47.521605 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:13:47.522990 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:13:47.563250 coreos-metadata[1945]: Dec 16 13:13:47.563 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 13:13:47.565278 bash[2053]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:13:47.567465 coreos-metadata[1945]: Dec 16 13:13:47.567 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 13:13:47.568727 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 13:13:47.578298 coreos-metadata[1945]: Dec 16 13:13:47.578 INFO Fetch successful Dec 16 13:13:47.578398 coreos-metadata[1945]: Dec 16 13:13:47.578 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 13:13:47.579061 systemd[1]: Starting sshkeys.service... 
Dec 16 13:13:47.584231 coreos-metadata[1945]: Dec 16 13:13:47.584 INFO Fetch successful Dec 16 13:13:47.584328 coreos-metadata[1945]: Dec 16 13:13:47.584 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 13:13:47.586532 coreos-metadata[1945]: Dec 16 13:13:47.586 INFO Fetch successful Dec 16 13:13:47.586645 coreos-metadata[1945]: Dec 16 13:13:47.586 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 13:13:47.588733 coreos-metadata[1945]: Dec 16 13:13:47.588 INFO Fetch successful Dec 16 13:13:47.588860 coreos-metadata[1945]: Dec 16 13:13:47.588 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 13:13:47.589831 coreos-metadata[1945]: Dec 16 13:13:47.589 INFO Fetch failed with 404: resource not found Dec 16 13:13:47.589952 coreos-metadata[1945]: Dec 16 13:13:47.589 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 13:13:47.618277 coreos-metadata[1945]: Dec 16 13:13:47.617 INFO Fetch successful Dec 16 13:13:47.619762 coreos-metadata[1945]: Dec 16 13:13:47.619 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 13:13:47.624054 coreos-metadata[1945]: Dec 16 13:13:47.622 INFO Fetch successful Dec 16 13:13:47.624054 coreos-metadata[1945]: Dec 16 13:13:47.622 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 13:13:47.626653 coreos-metadata[1945]: Dec 16 13:13:47.625 INFO Fetch successful Dec 16 13:13:47.626653 coreos-metadata[1945]: Dec 16 13:13:47.625 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 13:13:47.626843 coreos-metadata[1945]: Dec 16 13:13:47.626 INFO Fetch successful Dec 16 13:13:47.627006 coreos-metadata[1945]: Dec 16 13:13:47.626 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 13:13:47.630172 coreos-metadata[1945]: Dec 16 13:13:47.629 INFO Fetch successful
Dec 16 13:13:47.662675 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 13:13:47.674863 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 13:13:47.683807 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 13:13:47.731006 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 13:13:47.740720 dbus-daemon[1946]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2010 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 13:13:47.752418 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 13:13:47.764250 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 13:13:47.767370 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:13:48.009062 coreos-metadata[2084]: Dec 16 13:13:48.008 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 13:13:48.009062 coreos-metadata[2084]: Dec 16 13:13:48.008 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 13:13:48.009062 coreos-metadata[2084]: Dec 16 13:13:48.009 INFO Fetch successful Dec 16 13:13:48.009062 coreos-metadata[2084]: Dec 16 13:13:48.009 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 13:13:48.009062 coreos-metadata[2084]: Dec 16 13:13:48.009 INFO Fetch successful Dec 16 13:13:48.011115 unknown[2084]: wrote ssh authorized keys file for user: core Dec 16 13:13:48.064096 locksmithd[2016]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 13:13:48.090697 update-ssh-keys[2144]: Updated "/home/core/.ssh/authorized_keys" Dec 16 13:13:48.087113 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 13:13:48.098058 systemd[1]: Finished sshkeys.service. Dec 16 13:13:48.125831 containerd[1981]: time="2025-12-16T13:13:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 13:13:48.133872 containerd[1981]: time="2025-12-16T13:13:48.132892408Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 13:13:48.196461 systemd-coredump[2015]: Process 1952 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1952: #0 0x00005642ec86faeb n/a (ntpd + 0x68aeb) #1 0x00005642ec818cdf n/a (ntpd + 0x11cdf) #2 0x00005642ec819575 n/a (ntpd + 0x12575) #3 0x00005642ec814d8a n/a (ntpd + 0xdd8a) #4 0x00005642ec8165d3 n/a (ntpd + 0xf5d3) #5 0x00005642ec81efd1 n/a (ntpd + 0x17fd1) #6 0x00005642ec80fc2d n/a (ntpd + 0x8c2d) #7 0x00007f039762316c n/a (libc.so.6 + 0x2716c) #8 0x00007f0397623229 __libc_start_main (libc.so.6 + 0x27229) #9 0x00005642ec80fc55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 16 13:13:48.199401 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 13:13:48.199591 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 16 13:13:48.206051 systemd[1]: systemd-coredump@0-2012-0.service: Deactivated successfully. Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212113068Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.871µs" Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212156514Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212182325Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212381778Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212405921Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212436480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212502923Z" level=info 
msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 13:13:48.212998 containerd[1981]: time="2025-12-16T13:13:48.212517282Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:13:48.214228 containerd[1981]: time="2025-12-16T13:13:48.213972190Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 13:13:48.214402 containerd[1981]: time="2025-12-16T13:13:48.214376673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:13:48.214499 containerd[1981]: time="2025-12-16T13:13:48.214481104Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 13:13:48.215377 containerd[1981]: time="2025-12-16T13:13:48.214863493Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 13:13:48.215377 containerd[1981]: time="2025-12-16T13:13:48.214991980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 13:13:48.215377 containerd[1981]: time="2025-12-16T13:13:48.215229712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:13:48.215377 containerd[1981]: time="2025-12-16T13:13:48.215270594Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 13:13:48.215377 containerd[1981]: 
time="2025-12-16T13:13:48.215285265Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 13:13:48.216238 containerd[1981]: time="2025-12-16T13:13:48.216202483Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 13:13:48.217748 containerd[1981]: time="2025-12-16T13:13:48.217530832Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 13:13:48.217748 containerd[1981]: time="2025-12-16T13:13:48.217645105Z" level=info msg="metadata content store policy set" policy=shared Dec 16 13:13:48.223808 containerd[1981]: time="2025-12-16T13:13:48.223744038Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224255261Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224291808Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224309061Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224327458Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224353137Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224369221Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224386197Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224404241Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224419637Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224434548Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224451473Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 13:13:48.224975 containerd[1981]: time="2025-12-16T13:13:48.224599258Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.224620413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.225647836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.225674047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.225690731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.225720664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.225738779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection 
type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.225754983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.225771936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.226059215Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 13:13:48.226125 containerd[1981]: time="2025-12-16T13:13:48.226075491Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 13:13:48.227135 containerd[1981]: time="2025-12-16T13:13:48.226535462Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 13:13:48.227135 containerd[1981]: time="2025-12-16T13:13:48.226558497Z" level=info msg="Start snapshots syncer" Dec 16 13:13:48.227135 containerd[1981]: time="2025-12-16T13:13:48.226596155Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 13:13:48.228128 containerd[1981]: time="2025-12-16T13:13:48.227968784Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 13:13:48.228128 containerd[1981]: time="2025-12-16T13:13:48.228056283Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 13:13:48.228636 containerd[1981]: time="2025-12-16T13:13:48.228516595Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:13:48.229654 containerd[1981]: time="2025-12-16T13:13:48.229427609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:13:48.229654 containerd[1981]: time="2025-12-16T13:13:48.229592499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:13:48.229654 containerd[1981]: time="2025-12-16T13:13:48.229614877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:13:48.230346 containerd[1981]: time="2025-12-16T13:13:48.229827316Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:13:48.230346 containerd[1981]: time="2025-12-16T13:13:48.229859275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:13:48.230346 containerd[1981]: time="2025-12-16T13:13:48.229874888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:13:48.230346 containerd[1981]: time="2025-12-16T13:13:48.230223670Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:13:48.230346 containerd[1981]: time="2025-12-16T13:13:48.230274380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:13:48.230346 containerd[1981]: time="2025-12-16T13:13:48.230298241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:13:48.230346 containerd[1981]: time="2025-12-16T13:13:48.230313121Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233725252Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233772880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233803441Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233817799Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233829553Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233848243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233888705Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233908881Z" level=info msg="runtime interface created" Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233916604Z" level=info msg="created NRI interface" Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233929249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233962361Z" level=info msg="Connect containerd service" Dec 16 13:13:48.235880 containerd[1981]: time="2025-12-16T13:13:48.233994413Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:13:48.237918 
containerd[1981]: time="2025-12-16T13:13:48.236853448Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:13:48.255117 polkitd[2105]: Started polkitd version 126 Dec 16 13:13:48.268263 sshd_keygen[1970]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 13:13:48.276098 polkitd[2105]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 13:13:48.276926 polkitd[2105]: Loading rules from directory /run/polkit-1/rules.d Dec 16 13:13:48.279739 polkitd[2105]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:13:48.280192 polkitd[2105]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 13:13:48.280228 polkitd[2105]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 13:13:48.280269 polkitd[2105]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 13:13:48.286994 polkitd[2105]: Finished loading, compiling and executing 2 rules Dec 16 13:13:48.287454 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 13:13:48.290517 dbus-daemon[1946]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 13:13:48.292305 polkitd[2105]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 13:13:48.306319 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 13:13:48.309819 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 16 13:13:48.314922 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 13:13:48.318240 systemd[1]: Started ntpd.service - Network Time Service. 
Dec 16 13:13:48.323666 systemd[1]: Started sshd@0-172.31.28.249:22-139.178.68.195:48344.service - OpenSSH per-connection server daemon (139.178.68.195:48344). Dec 16 13:13:48.361672 systemd-hostnamed[2010]: Hostname set to (transient) Dec 16 13:13:48.364332 systemd-resolved[1854]: System hostname changed to 'ip-172-31-28-249'. Dec 16 13:13:48.377942 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 13:13:48.379844 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 13:13:48.385767 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 13:13:48.390979 ntpd[2181]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: ---------------------------------------------------- Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: ntp-4 is maintained by Network Time Foundation, Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: corporation. 
Support and training for ntp-4 are Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: available at https://www.nwtime.org/support Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: ---------------------------------------------------- Dec 16 13:13:48.392022 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: proto: precision = 0.072 usec (-24) Dec 16 13:13:48.391050 ntpd[2181]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 13:13:48.403537 kernel: ntpd[2181]: segfault at 24 ip 000055fc0f162aeb sp 00007ffd12ef1a90 error 4 in ntpd[68aeb,55fc0f100000+80000] likely on CPU 1 (core 0, socket 0) Dec 16 13:13:48.403600 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: basedate set to 2025-11-30 Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: Listen normally on 3 eth0 172.31.28.249:123 Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: bind(21) AF_INET6 [fe80::42f:79ff:fe43:9ded%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:13:48.403650 ntpd[2181]: 16 Dec 13:13:48 ntpd[2181]: unable to create socket on eth0 (5) for [fe80::42f:79ff:fe43:9ded%2]:123 Dec 16 13:13:48.391061 ntpd[2181]: ---------------------------------------------------- Dec 16 13:13:48.391070 ntpd[2181]: ntp-4 is maintained by 
Network Time Foundation, Dec 16 13:13:48.391079 ntpd[2181]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 13:13:48.391088 ntpd[2181]: corporation. Support and training for ntp-4 are Dec 16 13:13:48.391097 ntpd[2181]: available at https://www.nwtime.org/support Dec 16 13:13:48.391106 ntpd[2181]: ---------------------------------------------------- Dec 16 13:13:48.391844 ntpd[2181]: proto: precision = 0.072 usec (-24) Dec 16 13:13:48.392112 ntpd[2181]: basedate set to 2025-11-30 Dec 16 13:13:48.392127 ntpd[2181]: gps base set to 2025-11-30 (week 2395) Dec 16 13:13:48.392216 ntpd[2181]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 13:13:48.392245 ntpd[2181]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 13:13:48.392447 ntpd[2181]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 13:13:48.392475 ntpd[2181]: Listen normally on 3 eth0 172.31.28.249:123 Dec 16 13:13:48.392505 ntpd[2181]: Listen normally on 4 lo [::1]:123 Dec 16 13:13:48.392535 ntpd[2181]: bind(21) AF_INET6 [fe80::42f:79ff:fe43:9ded%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 13:13:48.392558 ntpd[2181]: unable to create socket on eth0 (5) for [fe80::42f:79ff:fe43:9ded%2]:123 Dec 16 13:13:48.408856 systemd-coredump[2191]: Process 2181 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 16 13:13:48.417784 systemd[1]: Started systemd-coredump@1-2191-0.service - Process Core Dump (PID 2191/UID 0). Dec 16 13:13:48.448707 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 13:13:48.452814 systemd-networkd[1850]: eth0: Gained IPv6LL Dec 16 13:13:48.455179 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 13:13:48.463066 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 13:13:48.466119 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 13:13:48.470353 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 16 13:13:48.473600 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 13:13:48.483865 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 13:13:48.495556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:13:48.502073 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.531876260Z" level=info msg="Start subscribing containerd event" Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532664753Z" level=info msg="Start recovering state" Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532792456Z" level=info msg="Start event monitor" Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532808629Z" level=info msg="Start cni network conf syncer for default" Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532820369Z" level=info msg="Start streaming server" Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532830510Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532840221Z" level=info msg="runtime interface starting up..." Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532848374Z" level=info msg="starting plugins..." Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.532862946Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.533157563Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.533208678Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 13:13:48.534696 containerd[1981]: time="2025-12-16T13:13:48.533303693Z" level=info msg="containerd successfully booted in 0.408831s" Dec 16 13:13:48.533416 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 13:13:48.617196 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 13:13:48.705915 sshd[2182]: Accepted publickey for core from 139.178.68.195 port 48344 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:13:48.714821 systemd-coredump[2192]: Process 2181 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 2181: #0 0x000055fc0f162aeb n/a (ntpd + 0x68aeb) #1 0x000055fc0f10bcdf n/a (ntpd + 0x11cdf) #2 0x000055fc0f10c575 n/a (ntpd + 0x12575) #3 0x000055fc0f107d8a n/a (ntpd + 0xdd8a) #4 0x000055fc0f1095d3 n/a (ntpd + 0xf5d3) #5 0x000055fc0f111fd1 n/a (ntpd + 0x17fd1) #6 0x000055fc0f102c2d n/a (ntpd + 0x8c2d) #7 0x00007fafa41c216c n/a (libc.so.6 + 0x2716c) #8 0x00007fafa41c2229 __libc_start_main (libc.so.6 + 0x27229) #9 0x000055fc0f102c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Dec 16 13:13:48.718464 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 13:13:48.718709 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 16 13:13:48.725496 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:13:48.728578 systemd[1]: systemd-coredump@1-2191-0.service: Deactivated successfully. Dec 16 13:13:48.753848 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Dec 16 13:13:48.756831 amazon-ssm-agent[2203]: Initializing new seelog logger Dec 16 13:13:48.758292 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:13:48.764657 amazon-ssm-agent[2203]: New Seelog Logger Creation Complete Dec 16 13:13:48.764657 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.764657 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.764657 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 processing appconfig overrides Dec 16 13:13:48.769198 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.772648 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.7690 INFO Proxy environment variables: Dec 16 13:13:48.772648 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.772648 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 processing appconfig overrides Dec 16 13:13:48.775764 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.775764 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.775893 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 processing appconfig overrides Dec 16 13:13:48.781662 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.781662 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 13:13:48.781662 amazon-ssm-agent[2203]: 2025/12/16 13:13:48 processing appconfig overrides Dec 16 13:13:48.786713 systemd-logind[1962]: New session 1 of user core. Dec 16 13:13:48.804436 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Dec 16 13:13:48.813906 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 13:13:48.826561 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 2.
Dec 16 13:13:48.834082 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 13:13:48.835582 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:13:48.846535 systemd-logind[1962]: New session c1 of user core.
Dec 16 13:13:48.874794 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.7691 INFO https_proxy:
Dec 16 13:13:48.877645 tar[1972]: linux-amd64/README.md
Dec 16 13:13:48.910912 ntpd[2233]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:44:12 UTC 2025 (1): Starting
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: ----------------------------------------------------
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: corporation. Support and training for ntp-4 are
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: available at https://www.nwtime.org/support
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: ----------------------------------------------------
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: proto: precision = 0.088 usec (-23)
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: basedate set to 2025-11-30
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:48.912665 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:48.911399 ntpd[2233]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 13:13:48.911411 ntpd[2233]: ----------------------------------------------------
Dec 16 13:13:48.911421 ntpd[2233]: ntp-4 is maintained by Network Time Foundation,
Dec 16 13:13:48.911430 ntpd[2233]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 13:13:48.911439 ntpd[2233]: corporation. Support and training for ntp-4 are
Dec 16 13:13:48.911448 ntpd[2233]: available at https://www.nwtime.org/support
Dec 16 13:13:48.911457 ntpd[2233]: ----------------------------------------------------
Dec 16 13:13:48.912202 ntpd[2233]: proto: precision = 0.088 usec (-23)
Dec 16 13:13:48.912476 ntpd[2233]: basedate set to 2025-11-30
Dec 16 13:13:48.912489 ntpd[2233]: gps base set to 2025-11-30 (week 2395)
Dec 16 13:13:48.912576 ntpd[2233]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 13:13:48.912603 ntpd[2233]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 13:13:48.918170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:13:48.919876 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:48.919876 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Listen normally on 3 eth0 172.31.28.249:123
Dec 16 13:13:48.919876 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:48.919876 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Listen normally on 5 eth0 [fe80::42f:79ff:fe43:9ded%2]:123
Dec 16 13:13:48.919876 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:13:48.917090 ntpd[2233]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 13:13:48.917126 ntpd[2233]: Listen normally on 3 eth0 172.31.28.249:123
Dec 16 13:13:48.917160 ntpd[2233]: Listen normally on 4 lo [::1]:123
Dec 16 13:13:48.917191 ntpd[2233]: Listen normally on 5 eth0 [fe80::42f:79ff:fe43:9ded%2]:123
Dec 16 13:13:48.917218 ntpd[2233]: Listening on routing socket on fd #22 for interface updates
Dec 16 13:13:48.923118 ntpd[2233]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:48.923773 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:48.923773 ntpd[2233]: 16 Dec 13:13:48 ntpd[2233]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:48.923153 ntpd[2233]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 16 13:13:48.975140 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.7691 INFO http_proxy:
Dec 16 13:13:49.074474 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.7691 INFO no_proxy:
Dec 16 13:13:49.136700 systemd[2232]: Queued start job for default target default.target.
Dec 16 13:13:49.145574 systemd[2232]: Created slice app.slice - User Application Slice.
Dec 16 13:13:49.146119 systemd[2232]: Reached target paths.target - Paths.
Dec 16 13:13:49.146291 systemd[2232]: Reached target timers.target - Timers.
Dec 16 13:13:49.149710 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:13:49.173402 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:13:49.173783 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.7715 INFO Checking if agent identity type OnPrem can be assumed
Dec 16 13:13:49.174744 systemd[2232]: Reached target sockets.target - Sockets.
Dec 16 13:13:49.174814 systemd[2232]: Reached target basic.target - Basic System.
Dec 16 13:13:49.174865 systemd[2232]: Reached target default.target - Main User Target.
Dec 16 13:13:49.174905 systemd[2232]: Startup finished in 303ms.
Dec 16 13:13:49.175075 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:13:49.183444 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:13:49.273179 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.7724 INFO Checking if agent identity type EC2 can be assumed
Dec 16 13:13:49.339759 systemd[1]: Started sshd@1-172.31.28.249:22-139.178.68.195:48352.service - OpenSSH per-connection server daemon (139.178.68.195:48352).
Dec 16 13:13:49.350931 amazon-ssm-agent[2203]: 2025/12/16 13:13:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:13:49.351118 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 13:13:49.351343 amazon-ssm-agent[2203]: 2025/12/16 13:13:49 processing appconfig overrides
Dec 16 13:13:49.371899 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9284 INFO Agent will take identity from EC2
Dec 16 13:13:49.390508 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9303 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
Dec 16 13:13:49.390705 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9303 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 16 13:13:49.390705 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9303 INFO [amazon-ssm-agent] Starting Core Agent
Dec 16 13:13:49.390705 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9303 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
Dec 16 13:13:49.390705 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9304 INFO [Registrar] Starting registrar module
Dec 16 13:13:49.390705 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9329 INFO [EC2Identity] Checking disk for registration info
Dec 16 13:13:49.390705 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9329 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:48.9330 INFO [EC2Identity] Generating registration keypair
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3003 INFO [EC2Identity] Checking write access before registering
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3023 INFO [EC2Identity] Registering EC2 instance with Systems Manager
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3506 INFO [EC2Identity] EC2 registration was successful.
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3507 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3508 INFO [CredentialRefresher] credentialRefresher has started
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3508 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3902 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 16 13:13:49.390889 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3904 INFO [CredentialRefresher] Credentials ready
Dec 16 13:13:49.470541 amazon-ssm-agent[2203]: 2025-12-16 13:13:49.3908 INFO [CredentialRefresher] Next credential rotation will be in 29.9999904709 minutes
Dec 16 13:13:49.521704 sshd[2252]: Accepted publickey for core from 139.178.68.195 port 48352 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:13:49.522986 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:49.528916 systemd-logind[1962]: New session 2 of user core.
Dec 16 13:13:49.533846 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:13:49.658077 sshd[2255]: Connection closed by 139.178.68.195 port 48352
Dec 16 13:13:49.659043 sshd-session[2252]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:49.662588 systemd[1]: sshd@1-172.31.28.249:22-139.178.68.195:48352.service: Deactivated successfully.
Dec 16 13:13:49.664674 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:13:49.666393 systemd-logind[1962]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:13:49.667976 systemd-logind[1962]: Removed session 2.
Dec 16 13:13:49.688916 systemd[1]: Started sshd@2-172.31.28.249:22-139.178.68.195:48358.service - OpenSSH per-connection server daemon (139.178.68.195:48358).
Dec 16 13:13:49.851045 sshd[2261]: Accepted publickey for core from 139.178.68.195 port 48358 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:13:49.851878 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:13:49.857148 systemd-logind[1962]: New session 3 of user core.
Dec 16 13:13:49.861821 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:13:49.984651 sshd[2264]: Connection closed by 139.178.68.195 port 48358
Dec 16 13:13:49.985307 sshd-session[2261]: pam_unix(sshd:session): session closed for user core
Dec 16 13:13:49.989993 systemd[1]: sshd@2-172.31.28.249:22-139.178.68.195:48358.service: Deactivated successfully.
Dec 16 13:13:49.992022 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:13:49.993728 systemd-logind[1962]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:13:49.995594 systemd-logind[1962]: Removed session 3.
Dec 16 13:13:50.403420 amazon-ssm-agent[2203]: 2025-12-16 13:13:50.4030 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 16 13:13:50.505090 amazon-ssm-agent[2203]: 2025-12-16 13:13:50.4050 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2271) started
Dec 16 13:13:50.605677 amazon-ssm-agent[2203]: 2025-12-16 13:13:50.4050 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 16 13:13:50.823072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:13:50.824446 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:13:50.826970 systemd[1]: Startup finished in 2.626s (kernel) + 5.824s (initrd) + 7.425s (userspace) = 15.876s.
Dec 16 13:13:50.831597 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:13:51.761548 kubelet[2287]: E1216 13:13:51.761465 2287 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:13:51.764255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:13:51.764681 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:13:51.765241 systemd[1]: kubelet.service: Consumed 1.000s CPU time, 258.2M memory peak.
Dec 16 13:13:56.333020 systemd-resolved[1854]: Clock change detected. Flushing caches.
Dec 16 13:14:00.440522 systemd[1]: Started sshd@3-172.31.28.249:22-139.178.68.195:59252.service - OpenSSH per-connection server daemon (139.178.68.195:59252).
Dec 16 13:14:00.611130 sshd[2299]: Accepted publickey for core from 139.178.68.195 port 59252 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:00.612514 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:00.618892 systemd-logind[1962]: New session 4 of user core.
Dec 16 13:14:00.624822 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:14:00.747179 sshd[2302]: Connection closed by 139.178.68.195 port 59252
Dec 16 13:14:00.748147 sshd-session[2299]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:00.752984 systemd[1]: sshd@3-172.31.28.249:22-139.178.68.195:59252.service: Deactivated successfully.
Dec 16 13:14:00.755165 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:14:00.756289 systemd-logind[1962]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:14:00.758100 systemd-logind[1962]: Removed session 4.
Dec 16 13:14:00.780204 systemd[1]: Started sshd@4-172.31.28.249:22-139.178.68.195:59264.service - OpenSSH per-connection server daemon (139.178.68.195:59264).
Dec 16 13:14:00.964109 sshd[2308]: Accepted publickey for core from 139.178.68.195 port 59264 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:00.965411 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:00.971210 systemd-logind[1962]: New session 5 of user core.
Dec 16 13:14:00.977795 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:14:01.097928 sshd[2311]: Connection closed by 139.178.68.195 port 59264
Dec 16 13:14:01.098630 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:01.114748 systemd[1]: sshd@4-172.31.28.249:22-139.178.68.195:59264.service: Deactivated successfully.
Dec 16 13:14:01.123086 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:14:01.125897 systemd-logind[1962]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:14:01.166387 systemd[1]: Started sshd@5-172.31.28.249:22-139.178.68.195:59276.service - OpenSSH per-connection server daemon (139.178.68.195:59276).
Dec 16 13:14:01.170134 systemd-logind[1962]: Removed session 5.
Dec 16 13:14:01.440700 sshd[2317]: Accepted publickey for core from 139.178.68.195 port 59276 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:01.444375 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:01.466795 systemd-logind[1962]: New session 6 of user core.
Dec 16 13:14:01.487452 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:14:01.676424 sshd[2320]: Connection closed by 139.178.68.195 port 59276
Dec 16 13:14:01.677126 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:01.690276 systemd[1]: sshd@5-172.31.28.249:22-139.178.68.195:59276.service: Deactivated successfully.
Dec 16 13:14:01.697730 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:14:01.700730 systemd-logind[1962]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:14:01.742150 systemd[1]: Started sshd@6-172.31.28.249:22-139.178.68.195:59292.service - OpenSSH per-connection server daemon (139.178.68.195:59292).
Dec 16 13:14:01.749066 systemd-logind[1962]: Removed session 6.
Dec 16 13:14:01.985349 sshd[2326]: Accepted publickey for core from 139.178.68.195 port 59292 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:01.988264 sshd-session[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:02.020462 systemd-logind[1962]: New session 7 of user core.
Dec 16 13:14:02.029798 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:14:02.247791 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 13:14:02.248277 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:14:02.253458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:14:02.263827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:14:02.308940 sudo[2330]: pam_unix(sudo:session): session closed for user root
Dec 16 13:14:02.332508 sshd[2329]: Connection closed by 139.178.68.195 port 59292
Dec 16 13:14:02.334918 sshd-session[2326]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:02.382884 systemd[1]: sshd@6-172.31.28.249:22-139.178.68.195:59292.service: Deactivated successfully.
Dec 16 13:14:02.394697 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 13:14:02.408890 systemd-logind[1962]: Session 7 logged out. Waiting for processes to exit.
Dec 16 13:14:02.412139 systemd[1]: Started sshd@7-172.31.28.249:22-139.178.68.195:59308.service - OpenSSH per-connection server daemon (139.178.68.195:59308).
Dec 16 13:14:02.419254 systemd-logind[1962]: Removed session 7.
Dec 16 13:14:02.643281 sshd[2339]: Accepted publickey for core from 139.178.68.195 port 59308 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:02.660469 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:02.720066 systemd-logind[1962]: New session 8 of user core.
Dec 16 13:14:02.729765 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 13:14:02.858546 sudo[2348]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 13:14:02.859082 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:14:02.869492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:14:02.878071 sudo[2348]: pam_unix(sudo:session): session closed for user root
Dec 16 13:14:02.886157 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:14:02.900224 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 13:14:02.901404 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:14:02.934857 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:14:03.044698 augenrules[2378]: No rules
Dec 16 13:14:03.054201 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:14:03.054627 kubelet[2351]: E1216 13:14:03.054361 2351 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:14:03.055023 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:14:03.057795 sudo[2347]: pam_unix(sudo:session): session closed for user root
Dec 16 13:14:03.060350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:14:03.060540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:14:03.062048 systemd[1]: kubelet.service: Consumed 223ms CPU time, 110.4M memory peak.
Dec 16 13:14:03.081633 sshd[2342]: Connection closed by 139.178.68.195 port 59308
Dec 16 13:14:03.082350 sshd-session[2339]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:03.096071 systemd[1]: sshd@7-172.31.28.249:22-139.178.68.195:59308.service: Deactivated successfully.
Dec 16 13:14:03.098271 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 13:14:03.102783 systemd-logind[1962]: Session 8 logged out. Waiting for processes to exit.
Dec 16 13:14:03.142274 systemd-logind[1962]: Removed session 8.
Dec 16 13:14:03.146839 systemd[1]: Started sshd@8-172.31.28.249:22-139.178.68.195:59320.service - OpenSSH per-connection server daemon (139.178.68.195:59320).
Dec 16 13:14:03.353776 sshd[2388]: Accepted publickey for core from 139.178.68.195 port 59320 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM
Dec 16 13:14:03.355964 sshd-session[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:14:03.366161 systemd-logind[1962]: New session 9 of user core.
Dec 16 13:14:03.372632 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 13:14:03.487124 sudo[2392]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:14:03.487720 sudo[2392]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:14:04.114141 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:14:04.125109 (dockerd)[2410]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:14:04.567671 dockerd[2410]: time="2025-12-16T13:14:04.567502917Z" level=info msg="Starting up"
Dec 16 13:14:04.577706 dockerd[2410]: time="2025-12-16T13:14:04.577493044Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:14:04.596990 dockerd[2410]: time="2025-12-16T13:14:04.596936253Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:14:04.675395 dockerd[2410]: time="2025-12-16T13:14:04.675194761Z" level=info msg="Loading containers: start."
Dec 16 13:14:04.696585 kernel: Initializing XFRM netlink socket
Dec 16 13:14:04.992541 (udev-worker)[2432]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 13:14:05.041521 systemd-networkd[1850]: docker0: Link UP
Dec 16 13:14:05.052716 dockerd[2410]: time="2025-12-16T13:14:05.052650629Z" level=info msg="Loading containers: done."
Dec 16 13:14:05.077263 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2351258131-merged.mount: Deactivated successfully.
Dec 16 13:14:05.087918 dockerd[2410]: time="2025-12-16T13:14:05.087811689Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:14:05.088283 dockerd[2410]: time="2025-12-16T13:14:05.087949536Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:14:05.088283 dockerd[2410]: time="2025-12-16T13:14:05.088266906Z" level=info msg="Initializing buildkit"
Dec 16 13:14:05.138296 dockerd[2410]: time="2025-12-16T13:14:05.138244339Z" level=info msg="Completed buildkit initialization"
Dec 16 13:14:05.148940 dockerd[2410]: time="2025-12-16T13:14:05.148873850Z" level=info msg="Daemon has completed initialization"
Dec 16 13:14:05.148940 dockerd[2410]: time="2025-12-16T13:14:05.148974267Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:14:05.149179 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:14:06.419280 containerd[1981]: time="2025-12-16T13:14:06.419237571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 16 13:14:07.030347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371430336.mount: Deactivated successfully.
Dec 16 13:14:08.367296 containerd[1981]: time="2025-12-16T13:14:08.367242924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:08.368541 containerd[1981]: time="2025-12-16T13:14:08.368372009Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073"
Dec 16 13:14:08.369849 containerd[1981]: time="2025-12-16T13:14:08.369809674Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:08.373354 containerd[1981]: time="2025-12-16T13:14:08.373316072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:08.374578 containerd[1981]: time="2025-12-16T13:14:08.374277184Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 1.955002875s"
Dec 16 13:14:08.374578 containerd[1981]: time="2025-12-16T13:14:08.374312048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Dec 16 13:14:08.374894 containerd[1981]: time="2025-12-16T13:14:08.374841655Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 13:14:09.737352 containerd[1981]: time="2025-12-16T13:14:09.737304927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:09.743033 containerd[1981]: time="2025-12-16T13:14:09.741991199Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440"
Dec 16 13:14:09.745171 containerd[1981]: time="2025-12-16T13:14:09.745112102Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:09.750651 containerd[1981]: time="2025-12-16T13:14:09.750604252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:09.752447 containerd[1981]: time="2025-12-16T13:14:09.751601752Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.376725452s"
Dec 16 13:14:09.752447 containerd[1981]: time="2025-12-16T13:14:09.751634079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Dec 16 13:14:09.752585 containerd[1981]: time="2025-12-16T13:14:09.752549668Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 13:14:10.856721 containerd[1981]: time="2025-12-16T13:14:10.856667594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:10.857952 containerd[1981]: time="2025-12-16T13:14:10.857898307Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927"
Dec 16 13:14:10.860153 containerd[1981]: time="2025-12-16T13:14:10.860097614Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:10.864491 containerd[1981]: time="2025-12-16T13:14:10.863639218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:10.864491 containerd[1981]: time="2025-12-16T13:14:10.864371554Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.111785731s"
Dec 16 13:14:10.864491 containerd[1981]: time="2025-12-16T13:14:10.864399799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Dec 16 13:14:10.865069 containerd[1981]: time="2025-12-16T13:14:10.864986132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 13:14:12.014769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655054375.mount: Deactivated successfully.
Dec 16 13:14:12.454898 containerd[1981]: time="2025-12-16T13:14:12.454825684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:12.457221 containerd[1981]: time="2025-12-16T13:14:12.456978234Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293"
Dec 16 13:14:12.459368 containerd[1981]: time="2025-12-16T13:14:12.459328516Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:12.464866 containerd[1981]: time="2025-12-16T13:14:12.464802301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:12.466487 containerd[1981]: time="2025-12-16T13:14:12.466429213Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.601288025s"
Dec 16 13:14:12.466487 containerd[1981]: time="2025-12-16T13:14:12.466481959Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Dec 16 13:14:12.467432 containerd[1981]: time="2025-12-16T13:14:12.467408449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 13:14:13.030086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986493485.mount: Deactivated successfully.
Dec 16 13:14:13.093333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 13:14:13.095006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:14:13.380486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:14:13.391023 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:14:13.461198 kubelet[2718]: E1216 13:14:13.461137 2718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:14:13.464239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:14:13.464440 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:14:13.465832 systemd[1]: kubelet.service: Consumed 208ms CPU time, 109.6M memory peak.
Dec 16 13:14:14.263025 containerd[1981]: time="2025-12-16T13:14:14.262955858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:14.265040 containerd[1981]: time="2025-12-16T13:14:14.265000383Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Dec 16 13:14:14.267529 containerd[1981]: time="2025-12-16T13:14:14.267469711Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:14.271611 containerd[1981]: time="2025-12-16T13:14:14.271553208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:14.272550 containerd[1981]: time="2025-12-16T13:14:14.272517757Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.804965272s"
Dec 16 13:14:14.272694 containerd[1981]: time="2025-12-16T13:14:14.272679740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Dec 16 13:14:14.273116 containerd[1981]: time="2025-12-16T13:14:14.273088758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 13:14:14.770939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578771951.mount: Deactivated successfully.
Dec 16 13:14:14.783123 containerd[1981]: time="2025-12-16T13:14:14.783072925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:14.784950 containerd[1981]: time="2025-12-16T13:14:14.784893621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Dec 16 13:14:14.787429 containerd[1981]: time="2025-12-16T13:14:14.787324453Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:14.790729 containerd[1981]: time="2025-12-16T13:14:14.790650990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:14.791303 containerd[1981]: time="2025-12-16T13:14:14.791276802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 518.007507ms"
Dec 16 13:14:14.791410 containerd[1981]: time="2025-12-16T13:14:14.791398426Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Dec 16 13:14:14.791918 containerd[1981]: time="2025-12-16T13:14:14.791889077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 13:14:15.301896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758392096.mount: Deactivated successfully.
Dec 16 13:14:18.221693 containerd[1981]: time="2025-12-16T13:14:18.221633304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:18.223492 containerd[1981]: time="2025-12-16T13:14:18.223440614Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Dec 16 13:14:18.226135 containerd[1981]: time="2025-12-16T13:14:18.226075089Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:18.231141 containerd[1981]: time="2025-12-16T13:14:18.230099375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:18.231141 containerd[1981]: time="2025-12-16T13:14:18.230905287Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.438987952s" Dec 16 13:14:18.231141 containerd[1981]: time="2025-12-16T13:14:18.230934531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Dec 16 13:14:18.811899 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 13:14:22.314372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:22.314649 systemd[1]: kubelet.service: Consumed 208ms CPU time, 109.6M memory peak. Dec 16 13:14:22.317746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 13:14:22.353043 systemd[1]: Reload requested from client PID 2852 ('systemctl') (unit session-9.scope)... Dec 16 13:14:22.353069 systemd[1]: Reloading... Dec 16 13:14:22.498612 zram_generator::config[2902]: No configuration found. Dec 16 13:14:22.781338 systemd[1]: Reloading finished in 427 ms. Dec 16 13:14:22.846304 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 13:14:22.846586 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 13:14:22.846982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:22.847031 systemd[1]: kubelet.service: Consumed 148ms CPU time, 98M memory peak. Dec 16 13:14:22.849184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:14:23.366301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:23.379006 (kubelet)[2959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:14:23.479628 kubelet[2959]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:14:23.479628 kubelet[2959]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 13:14:23.479628 kubelet[2959]: I1216 13:14:23.479480 2959 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:14:23.701812 kubelet[2959]: I1216 13:14:23.701269 2959 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:14:23.701812 kubelet[2959]: I1216 13:14:23.701302 2959 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:14:23.701812 kubelet[2959]: I1216 13:14:23.701332 2959 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:14:23.701812 kubelet[2959]: I1216 13:14:23.701343 2959 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:14:23.701812 kubelet[2959]: I1216 13:14:23.701701 2959 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:14:23.726351 kubelet[2959]: I1216 13:14:23.725940 2959 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:14:23.727664 kubelet[2959]: E1216 13:14:23.727613 2959 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:14:23.742174 kubelet[2959]: I1216 13:14:23.742141 2959 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:14:23.746678 kubelet[2959]: I1216 13:14:23.746647 2959 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:14:23.752054 kubelet[2959]: I1216 13:14:23.751964 2959 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:14:23.754056 kubelet[2959]: I1216 13:14:23.752029 2959 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:14:23.754056 kubelet[2959]: I1216 13:14:23.754055 2959 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
13:14:23.754056 kubelet[2959]: I1216 13:14:23.754070 2959 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:14:23.754292 kubelet[2959]: I1216 13:14:23.754184 2959 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:14:23.759640 kubelet[2959]: I1216 13:14:23.759606 2959 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:23.760712 kubelet[2959]: I1216 13:14:23.760676 2959 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:14:23.760712 kubelet[2959]: I1216 13:14:23.760700 2959 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:14:23.760835 kubelet[2959]: I1216 13:14:23.760733 2959 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:14:23.763322 kubelet[2959]: I1216 13:14:23.762944 2959 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:14:23.769359 kubelet[2959]: E1216 13:14:23.769315 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-249&limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:14:23.772198 kubelet[2959]: E1216 13:14:23.772168 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:14:23.774256 kubelet[2959]: I1216 13:14:23.774217 2959 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:14:23.779862 kubelet[2959]: I1216 13:14:23.779810 2959 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:14:23.779862 kubelet[2959]: I1216 13:14:23.779847 2959 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:14:23.783578 kubelet[2959]: W1216 13:14:23.783502 2959 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 13:14:23.786241 kubelet[2959]: I1216 13:14:23.785911 2959 server.go:1262] "Started kubelet" Dec 16 13:14:23.787431 kubelet[2959]: I1216 13:14:23.786861 2959 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:14:23.792201 kubelet[2959]: E1216 13:14:23.790644 2959 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.249:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.249:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-249.1881b46270109494 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-249,UID:ip-172-31-28-249,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-249,},FirstTimestamp:2025-12-16 13:14:23.78587458 +0000 UTC m=+0.403188792,LastTimestamp:2025-12-16 13:14:23.78587458 +0000 UTC m=+0.403188792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-249,}" Dec 16 13:14:23.792895 kubelet[2959]: I1216 13:14:23.792872 2959 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:14:23.799477 kubelet[2959]: I1216 13:14:23.796593 2959 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:14:23.799477 kubelet[2959]: E1216 
13:14:23.798498 2959 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-28-249\" not found" Dec 16 13:14:23.804329 kubelet[2959]: I1216 13:14:23.804245 2959 server.go:310] "Adding debug handlers to kubelet server" Dec 16 13:14:23.809633 kubelet[2959]: I1216 13:14:23.809608 2959 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:14:23.809816 kubelet[2959]: I1216 13:14:23.809807 2959 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:14:23.811092 kubelet[2959]: I1216 13:14:23.810894 2959 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:14:23.811092 kubelet[2959]: I1216 13:14:23.810972 2959 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:14:23.811249 kubelet[2959]: I1216 13:14:23.811231 2959 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:14:23.812593 kubelet[2959]: I1216 13:14:23.812570 2959 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:14:23.814936 kubelet[2959]: E1216 13:14:23.814812 2959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-249?timeout=10s\": dial tcp 172.31.28.249:6443: connect: connection refused" interval="200ms" Dec 16 13:14:23.817103 kubelet[2959]: I1216 13:14:23.817056 2959 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:14:23.818668 kubelet[2959]: I1216 13:14:23.818643 2959 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:14:23.820602 kubelet[2959]: E1216 
13:14:23.820578 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:14:23.822641 kubelet[2959]: I1216 13:14:23.822620 2959 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:14:23.823650 kubelet[2959]: E1216 13:14:23.822906 2959 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:14:23.823650 kubelet[2959]: I1216 13:14:23.823005 2959 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:14:23.825506 kubelet[2959]: I1216 13:14:23.825459 2959 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:14:23.825506 kubelet[2959]: I1216 13:14:23.825480 2959 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:14:23.825506 kubelet[2959]: I1216 13:14:23.825506 2959 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:14:23.825630 kubelet[2959]: E1216 13:14:23.825543 2959 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:14:23.832735 kubelet[2959]: E1216 13:14:23.832710 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:14:23.852977 kubelet[2959]: I1216 13:14:23.852944 2959 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 
13:14:23.852977 kubelet[2959]: I1216 13:14:23.852967 2959 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:14:23.852977 kubelet[2959]: I1216 13:14:23.852986 2959 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:23.857720 kubelet[2959]: I1216 13:14:23.857683 2959 policy_none.go:49] "None policy: Start" Dec 16 13:14:23.857720 kubelet[2959]: I1216 13:14:23.857708 2959 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:14:23.857720 kubelet[2959]: I1216 13:14:23.857723 2959 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 13:14:23.861651 kubelet[2959]: I1216 13:14:23.861618 2959 policy_none.go:47] "Start" Dec 16 13:14:23.866574 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 13:14:23.880755 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 13:14:23.885092 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 13:14:23.897924 kubelet[2959]: E1216 13:14:23.897726 2959 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:14:23.897924 kubelet[2959]: I1216 13:14:23.897928 2959 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:14:23.898078 kubelet[2959]: I1216 13:14:23.897938 2959 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:14:23.898426 kubelet[2959]: I1216 13:14:23.898333 2959 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:14:23.899743 kubelet[2959]: E1216 13:14:23.899720 2959 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:14:23.899814 kubelet[2959]: E1216 13:14:23.899771 2959 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-249\" not found" Dec 16 13:14:23.945406 systemd[1]: Created slice kubepods-burstable-pod7fae3f8311cee117f64b5e6b47c4667c.slice - libcontainer container kubepods-burstable-pod7fae3f8311cee117f64b5e6b47c4667c.slice. Dec 16 13:14:23.956155 kubelet[2959]: E1216 13:14:23.955799 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:23.957840 systemd[1]: Created slice kubepods-burstable-pod1cee21d32483c57d7d364b90c85b72af.slice - libcontainer container kubepods-burstable-pod1cee21d32483c57d7d364b90c85b72af.slice. Dec 16 13:14:23.967345 kubelet[2959]: E1216 13:14:23.964337 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:23.969788 systemd[1]: Created slice kubepods-burstable-pod400593ed3de343d463aaa2d9a6c933db.slice - libcontainer container kubepods-burstable-pod400593ed3de343d463aaa2d9a6c933db.slice. 
Dec 16 13:14:23.973108 kubelet[2959]: E1216 13:14:23.973077 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:24.000227 kubelet[2959]: I1216 13:14:24.000101 2959 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-249" Dec 16 13:14:24.000486 kubelet[2959]: E1216 13:14:24.000425 2959 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.249:6443/api/v1/nodes\": dial tcp 172.31.28.249:6443: connect: connection refused" node="ip-172-31-28-249" Dec 16 13:14:24.010235 kubelet[2959]: I1216 13:14:24.010088 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fae3f8311cee117f64b5e6b47c4667c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-249\" (UID: \"7fae3f8311cee117f64b5e6b47c4667c\") " pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:24.010235 kubelet[2959]: I1216 13:14:24.010126 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:24.010235 kubelet[2959]: I1216 13:14:24.010148 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:24.010442 kubelet[2959]: I1216 13:14:24.010423 2959 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/400593ed3de343d463aaa2d9a6c933db-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-249\" (UID: \"400593ed3de343d463aaa2d9a6c933db\") " pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:24.010474 kubelet[2959]: I1216 13:14:24.010459 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fae3f8311cee117f64b5e6b47c4667c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-249\" (UID: \"7fae3f8311cee117f64b5e6b47c4667c\") " pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:24.010536 kubelet[2959]: I1216 13:14:24.010520 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:24.010590 kubelet[2959]: I1216 13:14:24.010540 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:24.010624 kubelet[2959]: I1216 13:14:24.010589 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:24.010624 
kubelet[2959]: I1216 13:14:24.010606 2959 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fae3f8311cee117f64b5e6b47c4667c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-249\" (UID: \"7fae3f8311cee117f64b5e6b47c4667c\") " pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:24.016965 kubelet[2959]: E1216 13:14:24.016906 2959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-249?timeout=10s\": dial tcp 172.31.28.249:6443: connect: connection refused" interval="400ms" Dec 16 13:14:24.202497 kubelet[2959]: I1216 13:14:24.202459 2959 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-249" Dec 16 13:14:24.202791 kubelet[2959]: E1216 13:14:24.202766 2959 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.249:6443/api/v1/nodes\": dial tcp 172.31.28.249:6443: connect: connection refused" node="ip-172-31-28-249" Dec 16 13:14:24.263077 containerd[1981]: time="2025-12-16T13:14:24.262950925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-249,Uid:7fae3f8311cee117f64b5e6b47c4667c,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:24.269342 containerd[1981]: time="2025-12-16T13:14:24.269292518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-249,Uid:1cee21d32483c57d7d364b90c85b72af,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:24.278163 containerd[1981]: time="2025-12-16T13:14:24.278121945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-249,Uid:400593ed3de343d463aaa2d9a6c933db,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:24.417965 kubelet[2959]: E1216 13:14:24.417905 2959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.28.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-249?timeout=10s\": dial tcp 172.31.28.249:6443: connect: connection refused" interval="800ms" Dec 16 13:14:24.604404 kubelet[2959]: I1216 13:14:24.604368 2959 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-249" Dec 16 13:14:24.604865 kubelet[2959]: E1216 13:14:24.604770 2959 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.249:6443/api/v1/nodes\": dial tcp 172.31.28.249:6443: connect: connection refused" node="ip-172-31-28-249" Dec 16 13:14:24.778183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880627251.mount: Deactivated successfully. Dec 16 13:14:24.794064 containerd[1981]: time="2025-12-16T13:14:24.794007627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:14:24.800513 containerd[1981]: time="2025-12-16T13:14:24.800447348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 16 13:14:24.802898 containerd[1981]: time="2025-12-16T13:14:24.802404757Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:14:24.803015 kubelet[2959]: E1216 13:14:24.802838 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.249:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-249&limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 13:14:24.804769 containerd[1981]: time="2025-12-16T13:14:24.804720465Z" level=info msg="ImageCreate event 
name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:14:24.808631 containerd[1981]: time="2025-12-16T13:14:24.808556395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:14:24.810875 containerd[1981]: time="2025-12-16T13:14:24.810808150Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:14:24.814301 containerd[1981]: time="2025-12-16T13:14:24.814239237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 13:14:24.815091 containerd[1981]: time="2025-12-16T13:14:24.815051265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 534.542444ms" Dec 16 13:14:24.815985 containerd[1981]: time="2025-12-16T13:14:24.815896170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 13:14:24.818305 containerd[1981]: time="2025-12-16T13:14:24.818188972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 546.0376ms" Dec 16 
13:14:24.823374 containerd[1981]: time="2025-12-16T13:14:24.823292651Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 556.159139ms" Dec 16 13:14:24.883881 kubelet[2959]: E1216 13:14:24.883762 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.249:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 13:14:24.980144 containerd[1981]: time="2025-12-16T13:14:24.980027599Z" level=info msg="connecting to shim 68dce3a276f572c4dc8158b4235464979b97b6c0a88beece3fab6e717e9446e7" address="unix:///run/containerd/s/649c79e2ffaa4c751086d3aa5c02795d8a871fca9679231d18ba45219ab58fd5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:24.984851 containerd[1981]: time="2025-12-16T13:14:24.984757869Z" level=info msg="connecting to shim 07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152" address="unix:///run/containerd/s/b2acb580873b84f97e4bc3bbb8f9e22f66d959e13cb1c2056cce3d6c039ffc4a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:24.993072 containerd[1981]: time="2025-12-16T13:14:24.992763574Z" level=info msg="connecting to shim 527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a" address="unix:///run/containerd/s/ef74835e8356bac2de5a19f95d72827cae06f2d6abb3d6b0600e09042bce5c33" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:25.091208 kubelet[2959]: E1216 13:14:25.091168 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.31.28.249:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 13:14:25.091654 kubelet[2959]: E1216 13:14:25.091597 2959 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.249:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 13:14:25.105918 systemd[1]: Started cri-containerd-07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152.scope - libcontainer container 07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152. Dec 16 13:14:25.113463 systemd[1]: Started cri-containerd-527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a.scope - libcontainer container 527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a. Dec 16 13:14:25.116376 systemd[1]: Started cri-containerd-68dce3a276f572c4dc8158b4235464979b97b6c0a88beece3fab6e717e9446e7.scope - libcontainer container 68dce3a276f572c4dc8158b4235464979b97b6c0a88beece3fab6e717e9446e7. 
Dec 16 13:14:25.196975 containerd[1981]: time="2025-12-16T13:14:25.196837352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-249,Uid:7fae3f8311cee117f64b5e6b47c4667c,Namespace:kube-system,Attempt:0,} returns sandbox id \"68dce3a276f572c4dc8158b4235464979b97b6c0a88beece3fab6e717e9446e7\"" Dec 16 13:14:25.210935 containerd[1981]: time="2025-12-16T13:14:25.210638035Z" level=info msg="CreateContainer within sandbox \"68dce3a276f572c4dc8158b4235464979b97b6c0a88beece3fab6e717e9446e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 13:14:25.220682 kubelet[2959]: E1216 13:14:25.220617 2959 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-249?timeout=10s\": dial tcp 172.31.28.249:6443: connect: connection refused" interval="1.6s" Dec 16 13:14:25.250175 containerd[1981]: time="2025-12-16T13:14:25.250132484Z" level=info msg="Container c09dd59ea8776f340b841d1bf5277ff5b925257c60bd0fd724bdf149d816860e: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:25.254617 containerd[1981]: time="2025-12-16T13:14:25.254553591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-249,Uid:1cee21d32483c57d7d364b90c85b72af,Namespace:kube-system,Attempt:0,} returns sandbox id \"07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152\"" Dec 16 13:14:25.265185 containerd[1981]: time="2025-12-16T13:14:25.265075833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-249,Uid:400593ed3de343d463aaa2d9a6c933db,Namespace:kube-system,Attempt:0,} returns sandbox id \"527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a\"" Dec 16 13:14:25.268453 containerd[1981]: time="2025-12-16T13:14:25.268034168Z" level=info msg="CreateContainer within sandbox 
\"07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 13:14:25.286408 containerd[1981]: time="2025-12-16T13:14:25.286373046Z" level=info msg="CreateContainer within sandbox \"527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 13:14:25.286826 containerd[1981]: time="2025-12-16T13:14:25.286774125Z" level=info msg="CreateContainer within sandbox \"68dce3a276f572c4dc8158b4235464979b97b6c0a88beece3fab6e717e9446e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c09dd59ea8776f340b841d1bf5277ff5b925257c60bd0fd724bdf149d816860e\"" Dec 16 13:14:25.287604 containerd[1981]: time="2025-12-16T13:14:25.287423811Z" level=info msg="StartContainer for \"c09dd59ea8776f340b841d1bf5277ff5b925257c60bd0fd724bdf149d816860e\"" Dec 16 13:14:25.289203 containerd[1981]: time="2025-12-16T13:14:25.289167501Z" level=info msg="connecting to shim c09dd59ea8776f340b841d1bf5277ff5b925257c60bd0fd724bdf149d816860e" address="unix:///run/containerd/s/649c79e2ffaa4c751086d3aa5c02795d8a871fca9679231d18ba45219ab58fd5" protocol=ttrpc version=3 Dec 16 13:14:25.289992 containerd[1981]: time="2025-12-16T13:14:25.289964925Z" level=info msg="Container 6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:25.305550 containerd[1981]: time="2025-12-16T13:14:25.305404355Z" level=info msg="CreateContainer within sandbox \"07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb\"" Dec 16 13:14:25.307923 systemd[1]: Started cri-containerd-c09dd59ea8776f340b841d1bf5277ff5b925257c60bd0fd724bdf149d816860e.scope - libcontainer container 
c09dd59ea8776f340b841d1bf5277ff5b925257c60bd0fd724bdf149d816860e. Dec 16 13:14:25.309908 containerd[1981]: time="2025-12-16T13:14:25.307726364Z" level=info msg="StartContainer for \"6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb\"" Dec 16 13:14:25.315632 containerd[1981]: time="2025-12-16T13:14:25.315596409Z" level=info msg="Container ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:25.318250 containerd[1981]: time="2025-12-16T13:14:25.318182979Z" level=info msg="connecting to shim 6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb" address="unix:///run/containerd/s/b2acb580873b84f97e4bc3bbb8f9e22f66d959e13cb1c2056cce3d6c039ffc4a" protocol=ttrpc version=3 Dec 16 13:14:25.340221 containerd[1981]: time="2025-12-16T13:14:25.340149167Z" level=info msg="CreateContainer within sandbox \"527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2\"" Dec 16 13:14:25.340945 containerd[1981]: time="2025-12-16T13:14:25.340911928Z" level=info msg="StartContainer for \"ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2\"" Dec 16 13:14:25.344577 containerd[1981]: time="2025-12-16T13:14:25.343777035Z" level=info msg="connecting to shim ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2" address="unix:///run/containerd/s/ef74835e8356bac2de5a19f95d72827cae06f2d6abb3d6b0600e09042bce5c33" protocol=ttrpc version=3 Dec 16 13:14:25.365881 systemd[1]: Started cri-containerd-6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb.scope - libcontainer container 6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb. 
Dec 16 13:14:25.392992 systemd[1]: Started cri-containerd-ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2.scope - libcontainer container ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2. Dec 16 13:14:25.410396 kubelet[2959]: I1216 13:14:25.409810 2959 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-249" Dec 16 13:14:25.412066 kubelet[2959]: E1216 13:14:25.412025 2959 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.249:6443/api/v1/nodes\": dial tcp 172.31.28.249:6443: connect: connection refused" node="ip-172-31-28-249" Dec 16 13:14:25.444324 containerd[1981]: time="2025-12-16T13:14:25.444282733Z" level=info msg="StartContainer for \"c09dd59ea8776f340b841d1bf5277ff5b925257c60bd0fd724bdf149d816860e\" returns successfully" Dec 16 13:14:25.493683 containerd[1981]: time="2025-12-16T13:14:25.492670793Z" level=info msg="StartContainer for \"6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb\" returns successfully" Dec 16 13:14:25.509378 containerd[1981]: time="2025-12-16T13:14:25.509336712Z" level=info msg="StartContainer for \"ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2\" returns successfully" Dec 16 13:14:25.699091 kubelet[2959]: E1216 13:14:25.698967 2959 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.249:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.249:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-249.1881b46270109494 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-249,UID:ip-172-31-28-249,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-249,},FirstTimestamp:2025-12-16 13:14:23.78587458 +0000 UTC m=+0.403188792,LastTimestamp:2025-12-16 13:14:23.78587458 +0000 UTC 
m=+0.403188792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-249,}" Dec 16 13:14:25.757205 kubelet[2959]: E1216 13:14:25.756839 2959 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.249:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.249:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 13:14:25.862155 kubelet[2959]: E1216 13:14:25.862123 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:25.867498 kubelet[2959]: E1216 13:14:25.867048 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:25.873476 kubelet[2959]: E1216 13:14:25.873421 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:26.880029 kubelet[2959]: E1216 13:14:26.880002 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:26.880466 kubelet[2959]: E1216 13:14:26.880430 2959 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:27.018629 kubelet[2959]: I1216 13:14:27.018602 2959 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-249" Dec 16 13:14:27.117263 kubelet[2959]: E1216 13:14:27.117231 2959 kubelet.go:3215] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:28.091822 kubelet[2959]: E1216 13:14:28.091775 2959 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-249\" not found" node="ip-172-31-28-249" Dec 16 13:14:28.177655 kubelet[2959]: I1216 13:14:28.177329 2959 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-249" Dec 16 13:14:28.177655 kubelet[2959]: E1216 13:14:28.177363 2959 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-249\": node \"ip-172-31-28-249\" not found" Dec 16 13:14:28.205507 kubelet[2959]: I1216 13:14:28.205483 2959 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:28.231064 kubelet[2959]: E1216 13:14:28.231035 2959 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:28.231389 kubelet[2959]: I1216 13:14:28.231221 2959 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:28.233009 kubelet[2959]: E1216 13:14:28.232985 2959 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-249\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:28.233251 kubelet[2959]: I1216 13:14:28.233104 2959 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:28.234795 kubelet[2959]: E1216 13:14:28.234773 2959 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-249\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:28.773873 kubelet[2959]: I1216 13:14:28.773826 2959 apiserver.go:52] "Watching apiserver" Dec 16 13:14:28.810438 kubelet[2959]: I1216 13:14:28.810387 2959 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:14:30.418332 systemd[1]: Reload requested from client PID 3244 ('systemctl') (unit session-9.scope)... Dec 16 13:14:30.418350 systemd[1]: Reloading... Dec 16 13:14:30.541607 zram_generator::config[3288]: No configuration found. Dec 16 13:14:30.599502 kubelet[2959]: I1216 13:14:30.599411 2959 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:30.826657 systemd[1]: Reloading finished in 407 ms. Dec 16 13:14:30.863120 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:14:30.880949 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 13:14:30.881260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:30.881342 systemd[1]: kubelet.service: Consumed 787ms CPU time, 122.5M memory peak. Dec 16 13:14:30.883707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 13:14:31.215140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 13:14:31.227981 (kubelet)[3348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 13:14:31.336158 kubelet[3348]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 13:14:31.337260 kubelet[3348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 13:14:31.338911 kubelet[3348]: I1216 13:14:31.338273 3348 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 13:14:31.353741 kubelet[3348]: I1216 13:14:31.353705 3348 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 13:14:31.353940 kubelet[3348]: I1216 13:14:31.353926 3348 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 13:14:31.361965 kubelet[3348]: I1216 13:14:31.361819 3348 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 13:14:31.362667 kubelet[3348]: I1216 13:14:31.362151 3348 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 13:14:31.363089 kubelet[3348]: I1216 13:14:31.363071 3348 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 13:14:31.365826 kubelet[3348]: I1216 13:14:31.365791 3348 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 13:14:31.374583 kubelet[3348]: I1216 13:14:31.373997 3348 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 13:14:31.399310 kubelet[3348]: I1216 13:14:31.399281 3348 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 13:14:31.403465 kubelet[3348]: I1216 13:14:31.403442 3348 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 13:14:31.406585 kubelet[3348]: I1216 13:14:31.405518 3348 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 13:14:31.406870 kubelet[3348]: I1216 13:14:31.406706 3348 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-249","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 13:14:31.407189 kubelet[3348]: I1216 13:14:31.406995 3348 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
13:14:31.407189 kubelet[3348]: I1216 13:14:31.407009 3348 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 13:14:31.407189 kubelet[3348]: I1216 13:14:31.407038 3348 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 13:14:31.408846 kubelet[3348]: I1216 13:14:31.408817 3348 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:31.409370 kubelet[3348]: I1216 13:14:31.409341 3348 kubelet.go:475] "Attempting to sync node with API server" Dec 16 13:14:31.409608 kubelet[3348]: I1216 13:14:31.409572 3348 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 13:14:31.412601 kubelet[3348]: I1216 13:14:31.412518 3348 kubelet.go:387] "Adding apiserver pod source" Dec 16 13:14:31.412601 kubelet[3348]: I1216 13:14:31.412545 3348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 13:14:31.417586 kubelet[3348]: I1216 13:14:31.416959 3348 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 13:14:31.417586 kubelet[3348]: I1216 13:14:31.417411 3348 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 13:14:31.417586 kubelet[3348]: I1216 13:14:31.417437 3348 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 13:14:31.422239 kubelet[3348]: I1216 13:14:31.422184 3348 server.go:1262] "Started kubelet" Dec 16 13:14:31.424707 kubelet[3348]: I1216 13:14:31.424688 3348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 13:14:31.441854 kubelet[3348]: I1216 13:14:31.441691 3348 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 13:14:31.447156 kubelet[3348]: I1216 13:14:31.445275 3348 server.go:310] "Adding debug handlers to 
kubelet server" Dec 16 13:14:31.453987 kubelet[3348]: I1216 13:14:31.453369 3348 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 13:14:31.453987 kubelet[3348]: I1216 13:14:31.453439 3348 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 13:14:31.453987 kubelet[3348]: I1216 13:14:31.453756 3348 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 13:14:31.454146 kubelet[3348]: I1216 13:14:31.454017 3348 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 13:14:31.458633 kubelet[3348]: I1216 13:14:31.457452 3348 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 13:14:31.458633 kubelet[3348]: E1216 13:14:31.457806 3348 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-28-249\" not found" Dec 16 13:14:31.458771 kubelet[3348]: I1216 13:14:31.458688 3348 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 13:14:31.458805 kubelet[3348]: I1216 13:14:31.458801 3348 reconciler.go:29] "Reconciler: start to sync state" Dec 16 13:14:31.471784 kubelet[3348]: I1216 13:14:31.470739 3348 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 13:14:31.475665 kubelet[3348]: I1216 13:14:31.475474 3348 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 13:14:31.478274 kubelet[3348]: E1216 13:14:31.478116 3348 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 13:14:31.478906 kubelet[3348]: I1216 13:14:31.478886 3348 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 13:14:31.479893 kubelet[3348]: I1216 13:14:31.479003 3348 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 13:14:31.479893 kubelet[3348]: I1216 13:14:31.479026 3348 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 13:14:31.480079 kubelet[3348]: E1216 13:14:31.480037 3348 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 13:14:31.482435 kubelet[3348]: I1216 13:14:31.482394 3348 factory.go:223] Registration of the containerd container factory successfully Dec 16 13:14:31.482435 kubelet[3348]: I1216 13:14:31.482415 3348 factory.go:223] Registration of the systemd container factory successfully Dec 16 13:14:31.546778 kubelet[3348]: I1216 13:14:31.546749 3348 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 13:14:31.546778 kubelet[3348]: I1216 13:14:31.546766 3348 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 13:14:31.546778 kubelet[3348]: I1216 13:14:31.546789 3348 state_mem.go:36] "Initialized new in-memory state store" Dec 16 13:14:31.546994 kubelet[3348]: I1216 13:14:31.546940 3348 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 13:14:31.546994 kubelet[3348]: I1216 13:14:31.546953 3348 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 13:14:31.546994 kubelet[3348]: I1216 13:14:31.546973 3348 policy_none.go:49] "None policy: Start" Dec 16 13:14:31.546994 kubelet[3348]: I1216 13:14:31.546986 3348 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 13:14:31.547158 kubelet[3348]: I1216 13:14:31.546998 3348 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Dec 16 13:14:31.549209 kubelet[3348]: I1216 13:14:31.547533 3348 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 13:14:31.549209 kubelet[3348]: I1216 13:14:31.547596 3348 policy_none.go:47] "Start" Dec 16 13:14:31.561760 kubelet[3348]: E1216 13:14:31.561724 3348 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 13:14:31.561989 kubelet[3348]: I1216 13:14:31.561933 3348 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 13:14:31.561989 kubelet[3348]: I1216 13:14:31.561949 3348 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 13:14:31.565285 kubelet[3348]: I1216 13:14:31.564982 3348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 13:14:31.565285 kubelet[3348]: E1216 13:14:31.565250 3348 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 13:14:31.583374 kubelet[3348]: I1216 13:14:31.583312 3348 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:31.585347 kubelet[3348]: I1216 13:14:31.583769 3348 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:31.585347 kubelet[3348]: I1216 13:14:31.584126 3348 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:31.601685 kubelet[3348]: E1216 13:14:31.601646 3348 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-249\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:31.684749 kubelet[3348]: I1216 13:14:31.684726 3348 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-249" Dec 16 13:14:31.698903 kubelet[3348]: I1216 13:14:31.698845 3348 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-249" Dec 16 13:14:31.699046 kubelet[3348]: I1216 13:14:31.698916 3348 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-249" Dec 16 13:14:31.760491 kubelet[3348]: I1216 13:14:31.760376 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7fae3f8311cee117f64b5e6b47c4667c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-249\" (UID: \"7fae3f8311cee117f64b5e6b47c4667c\") " pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:31.760491 kubelet[3348]: I1216 13:14:31.760412 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7fae3f8311cee117f64b5e6b47c4667c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-249\" (UID: \"7fae3f8311cee117f64b5e6b47c4667c\") " 
pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:31.760491 kubelet[3348]: I1216 13:14:31.760428 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:31.760491 kubelet[3348]: I1216 13:14:31.760445 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:31.760491 kubelet[3348]: I1216 13:14:31.760466 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/400593ed3de343d463aaa2d9a6c933db-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-249\" (UID: \"400593ed3de343d463aaa2d9a6c933db\") " pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:31.762346 kubelet[3348]: I1216 13:14:31.762271 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7fae3f8311cee117f64b5e6b47c4667c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-249\" (UID: \"7fae3f8311cee117f64b5e6b47c4667c\") " pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:31.762346 kubelet[3348]: I1216 13:14:31.762320 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:31.762346 kubelet[3348]: I1216 13:14:31.762335 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:31.762346 kubelet[3348]: I1216 13:14:31.762360 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1cee21d32483c57d7d364b90c85b72af-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-249\" (UID: \"1cee21d32483c57d7d364b90c85b72af\") " pod="kube-system/kube-controller-manager-ip-172-31-28-249" Dec 16 13:14:32.418484 kubelet[3348]: I1216 13:14:32.416758 3348 apiserver.go:52] "Watching apiserver" Dec 16 13:14:32.459864 kubelet[3348]: I1216 13:14:32.459826 3348 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 13:14:32.519370 kubelet[3348]: I1216 13:14:32.519311 3348 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:32.521215 kubelet[3348]: I1216 13:14:32.521185 3348 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:32.532704 kubelet[3348]: E1216 13:14:32.532666 3348 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-249\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-249" Dec 16 13:14:32.534324 kubelet[3348]: E1216 13:14:32.534292 3348 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-249\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-28-249" Dec 16 13:14:32.568644 kubelet[3348]: I1216 13:14:32.568413 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-249" podStartSLOduration=2.568399341 podStartE2EDuration="2.568399341s" podCreationTimestamp="2025-12-16 13:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:32.568145889 +0000 UTC m=+1.330378869" watchObservedRunningTime="2025-12-16 13:14:32.568399341 +0000 UTC m=+1.330632315" Dec 16 13:14:32.586085 kubelet[3348]: I1216 13:14:32.585875 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-249" podStartSLOduration=1.585860179 podStartE2EDuration="1.585860179s" podCreationTimestamp="2025-12-16 13:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:32.583300813 +0000 UTC m=+1.345533794" watchObservedRunningTime="2025-12-16 13:14:32.585860179 +0000 UTC m=+1.348093138" Dec 16 13:14:32.612026 kubelet[3348]: I1216 13:14:32.611971 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-249" podStartSLOduration=1.6119309130000001 podStartE2EDuration="1.611930913s" podCreationTimestamp="2025-12-16 13:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:32.599757241 +0000 UTC m=+1.361990219" watchObservedRunningTime="2025-12-16 13:14:32.611930913 +0000 UTC m=+1.374163885" Dec 16 13:14:33.373246 update_engine[1963]: I20251216 13:14:33.372613 1963 update_attempter.cc:509] Updating boot flags... 
Dec 16 13:14:36.370599 kubelet[3348]: I1216 13:14:36.370516 3348 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:14:36.371262 kubelet[3348]: I1216 13:14:36.371085 3348 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:14:36.371385 containerd[1981]: time="2025-12-16T13:14:36.370885880Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 13:14:37.076496 systemd[1]: Created slice kubepods-besteffort-pod9fc65cb1_bfae_4dfe_90bd_f4c388e8af22.slice - libcontainer container kubepods-besteffort-pod9fc65cb1_bfae_4dfe_90bd_f4c388e8af22.slice. Dec 16 13:14:37.098763 kubelet[3348]: I1216 13:14:37.098730 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrzpm\" (UniqueName: \"kubernetes.io/projected/9fc65cb1-bfae-4dfe-90bd-f4c388e8af22-kube-api-access-lrzpm\") pod \"kube-proxy-6ttxd\" (UID: \"9fc65cb1-bfae-4dfe-90bd-f4c388e8af22\") " pod="kube-system/kube-proxy-6ttxd" Dec 16 13:14:37.098900 kubelet[3348]: I1216 13:14:37.098882 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fc65cb1-bfae-4dfe-90bd-f4c388e8af22-kube-proxy\") pod \"kube-proxy-6ttxd\" (UID: \"9fc65cb1-bfae-4dfe-90bd-f4c388e8af22\") " pod="kube-system/kube-proxy-6ttxd" Dec 16 13:14:37.098962 kubelet[3348]: I1216 13:14:37.098912 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fc65cb1-bfae-4dfe-90bd-f4c388e8af22-xtables-lock\") pod \"kube-proxy-6ttxd\" (UID: \"9fc65cb1-bfae-4dfe-90bd-f4c388e8af22\") " pod="kube-system/kube-proxy-6ttxd" Dec 16 13:14:37.100911 kubelet[3348]: I1216 13:14:37.100879 3348 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fc65cb1-bfae-4dfe-90bd-f4c388e8af22-lib-modules\") pod \"kube-proxy-6ttxd\" (UID: \"9fc65cb1-bfae-4dfe-90bd-f4c388e8af22\") " pod="kube-system/kube-proxy-6ttxd" Dec 16 13:14:37.389258 containerd[1981]: time="2025-12-16T13:14:37.389192088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ttxd,Uid:9fc65cb1-bfae-4dfe-90bd-f4c388e8af22,Namespace:kube-system,Attempt:0,}" Dec 16 13:14:37.432468 containerd[1981]: time="2025-12-16T13:14:37.432414980Z" level=info msg="connecting to shim 8ebeb8b30f71e173c06b8f8caed9a66f78a2d0d7f5f5ec5dd1ef8bbcac031128" address="unix:///run/containerd/s/95315d80512c7dbeac4dc5237c7232fbd445be8be03deb629224ea3185026c6e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:37.466800 systemd[1]: Started cri-containerd-8ebeb8b30f71e173c06b8f8caed9a66f78a2d0d7f5f5ec5dd1ef8bbcac031128.scope - libcontainer container 8ebeb8b30f71e173c06b8f8caed9a66f78a2d0d7f5f5ec5dd1ef8bbcac031128. Dec 16 13:14:37.528921 containerd[1981]: time="2025-12-16T13:14:37.528881429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6ttxd,Uid:9fc65cb1-bfae-4dfe-90bd-f4c388e8af22,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ebeb8b30f71e173c06b8f8caed9a66f78a2d0d7f5f5ec5dd1ef8bbcac031128\"" Dec 16 13:14:37.543598 containerd[1981]: time="2025-12-16T13:14:37.543536736Z" level=info msg="CreateContainer within sandbox \"8ebeb8b30f71e173c06b8f8caed9a66f78a2d0d7f5f5ec5dd1ef8bbcac031128\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:14:37.571621 containerd[1981]: time="2025-12-16T13:14:37.570938993Z" level=info msg="Container 8a27b75cbc8245523dd3d938f3b546334192e8ec35e724bef22ebec9cf9acb59: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:37.582350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237543778.mount: Deactivated successfully. 
Dec 16 13:14:37.595149 systemd[1]: Created slice kubepods-besteffort-podc8c4912d_3b5a_4542_a91b_105c563a5599.slice - libcontainer container kubepods-besteffort-podc8c4912d_3b5a_4542_a91b_105c563a5599.slice.
Dec 16 13:14:37.600580 containerd[1981]: time="2025-12-16T13:14:37.598896731Z" level=info msg="CreateContainer within sandbox \"8ebeb8b30f71e173c06b8f8caed9a66f78a2d0d7f5f5ec5dd1ef8bbcac031128\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a27b75cbc8245523dd3d938f3b546334192e8ec35e724bef22ebec9cf9acb59\""
Dec 16 13:14:37.601994 containerd[1981]: time="2025-12-16T13:14:37.601958416Z" level=info msg="StartContainer for \"8a27b75cbc8245523dd3d938f3b546334192e8ec35e724bef22ebec9cf9acb59\""
Dec 16 13:14:37.604768 containerd[1981]: time="2025-12-16T13:14:37.604732701Z" level=info msg="connecting to shim 8a27b75cbc8245523dd3d938f3b546334192e8ec35e724bef22ebec9cf9acb59" address="unix:///run/containerd/s/95315d80512c7dbeac4dc5237c7232fbd445be8be03deb629224ea3185026c6e" protocol=ttrpc version=3
Dec 16 13:14:37.605253 kubelet[3348]: I1216 13:14:37.605211 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8c4912d-3b5a-4542-a91b-105c563a5599-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-t85z9\" (UID: \"c8c4912d-3b5a-4542-a91b-105c563a5599\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-t85z9"
Dec 16 13:14:37.605739 kubelet[3348]: I1216 13:14:37.605717 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cztt\" (UniqueName: \"kubernetes.io/projected/c8c4912d-3b5a-4542-a91b-105c563a5599-kube-api-access-4cztt\") pod \"tigera-operator-65cdcdfd6d-t85z9\" (UID: \"c8c4912d-3b5a-4542-a91b-105c563a5599\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-t85z9"
Dec 16 13:14:37.636818 systemd[1]: Started cri-containerd-8a27b75cbc8245523dd3d938f3b546334192e8ec35e724bef22ebec9cf9acb59.scope - libcontainer container 8a27b75cbc8245523dd3d938f3b546334192e8ec35e724bef22ebec9cf9acb59.
Dec 16 13:14:37.730849 containerd[1981]: time="2025-12-16T13:14:37.730161563Z" level=info msg="StartContainer for \"8a27b75cbc8245523dd3d938f3b546334192e8ec35e724bef22ebec9cf9acb59\" returns successfully"
Dec 16 13:14:37.908162 containerd[1981]: time="2025-12-16T13:14:37.908031385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-t85z9,Uid:c8c4912d-3b5a-4542-a91b-105c563a5599,Namespace:tigera-operator,Attempt:0,}"
Dec 16 13:14:37.937920 containerd[1981]: time="2025-12-16T13:14:37.937873638Z" level=info msg="connecting to shim 27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a" address="unix:///run/containerd/s/a93111817493a2d75262f055121b8b55db0a28c7cbee108c2c03d9bc04279ff4" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:37.973902 systemd[1]: Started cri-containerd-27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a.scope - libcontainer container 27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a.
Dec 16 13:14:38.034391 containerd[1981]: time="2025-12-16T13:14:38.033829888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-t85z9,Uid:c8c4912d-3b5a-4542-a91b-105c563a5599,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a\""
Dec 16 13:14:38.036017 containerd[1981]: time="2025-12-16T13:14:38.035949554Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 16 13:14:38.588442 kubelet[3348]: I1216 13:14:38.588375 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6ttxd" podStartSLOduration=1.588337562 podStartE2EDuration="1.588337562s" podCreationTimestamp="2025-12-16 13:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:14:38.572166755 +0000 UTC m=+7.334399738" watchObservedRunningTime="2025-12-16 13:14:38.588337562 +0000 UTC m=+7.350570542"
Dec 16 13:14:39.577849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292465114.mount: Deactivated successfully.
Dec 16 13:14:40.670650 containerd[1981]: time="2025-12-16T13:14:40.670572118Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:40.671818 containerd[1981]: time="2025-12-16T13:14:40.671593668Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Dec 16 13:14:40.673048 containerd[1981]: time="2025-12-16T13:14:40.673008466Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:40.675709 containerd[1981]: time="2025-12-16T13:14:40.675668984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:14:40.676423 containerd[1981]: time="2025-12-16T13:14:40.676397957Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.640254985s"
Dec 16 13:14:40.676578 containerd[1981]: time="2025-12-16T13:14:40.676531660Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Dec 16 13:14:40.682831 containerd[1981]: time="2025-12-16T13:14:40.682789213Z" level=info msg="CreateContainer within sandbox \"27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 16 13:14:40.696804 containerd[1981]: time="2025-12-16T13:14:40.696769367Z" level=info msg="Container a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:14:40.704315 containerd[1981]: time="2025-12-16T13:14:40.704275659Z" level=info msg="CreateContainer within sandbox \"27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3\""
Dec 16 13:14:40.705000 containerd[1981]: time="2025-12-16T13:14:40.704925266Z" level=info msg="StartContainer for \"a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3\""
Dec 16 13:14:40.705959 containerd[1981]: time="2025-12-16T13:14:40.705928450Z" level=info msg="connecting to shim a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3" address="unix:///run/containerd/s/a93111817493a2d75262f055121b8b55db0a28c7cbee108c2c03d9bc04279ff4" protocol=ttrpc version=3
Dec 16 13:14:40.731791 systemd[1]: Started cri-containerd-a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3.scope - libcontainer container a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3.
Dec 16 13:14:40.764434 containerd[1981]: time="2025-12-16T13:14:40.764399390Z" level=info msg="StartContainer for \"a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3\" returns successfully"
Dec 16 13:14:41.580510 kubelet[3348]: I1216 13:14:41.580336 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-t85z9" podStartSLOduration=1.9382484020000001 podStartE2EDuration="4.580321447s" podCreationTimestamp="2025-12-16 13:14:37 +0000 UTC" firstStartedPulling="2025-12-16 13:14:38.035253926 +0000 UTC m=+6.797486886" lastFinishedPulling="2025-12-16 13:14:40.67732697 +0000 UTC m=+9.439559931" observedRunningTime="2025-12-16 13:14:41.580114105 +0000 UTC m=+10.342347099" watchObservedRunningTime="2025-12-16 13:14:41.580321447 +0000 UTC m=+10.342554427"
Dec 16 13:14:47.752797 sudo[2392]: pam_unix(sudo:session): session closed for user root
Dec 16 13:14:47.775784 sshd[2391]: Connection closed by 139.178.68.195 port 59320
Dec 16 13:14:47.779330 sshd-session[2388]: pam_unix(sshd:session): session closed for user core
Dec 16 13:14:47.787196 systemd[1]: sshd@8-172.31.28.249:22-139.178.68.195:59320.service: Deactivated successfully.
Dec 16 13:14:47.793790 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 13:14:47.794789 systemd[1]: session-9.scope: Consumed 6.585s CPU time, 156.4M memory peak.
Dec 16 13:14:47.801765 systemd-logind[1962]: Session 9 logged out. Waiting for processes to exit.
Dec 16 13:14:47.805993 systemd-logind[1962]: Removed session 9.
Dec 16 13:14:54.375691 systemd[1]: Created slice kubepods-besteffort-pod387c7342_95b9_4dbd_ba92_e0bc16ccb9f5.slice - libcontainer container kubepods-besteffort-pod387c7342_95b9_4dbd_ba92_e0bc16ccb9f5.slice.
Dec 16 13:14:54.433993 kubelet[3348]: I1216 13:14:54.433926 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/387c7342-95b9-4dbd-ba92-e0bc16ccb9f5-tigera-ca-bundle\") pod \"calico-typha-6dc8c86d87-hhv48\" (UID: \"387c7342-95b9-4dbd-ba92-e0bc16ccb9f5\") " pod="calico-system/calico-typha-6dc8c86d87-hhv48"
Dec 16 13:14:54.433993 kubelet[3348]: I1216 13:14:54.433972 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlz95\" (UniqueName: \"kubernetes.io/projected/387c7342-95b9-4dbd-ba92-e0bc16ccb9f5-kube-api-access-nlz95\") pod \"calico-typha-6dc8c86d87-hhv48\" (UID: \"387c7342-95b9-4dbd-ba92-e0bc16ccb9f5\") " pod="calico-system/calico-typha-6dc8c86d87-hhv48"
Dec 16 13:14:54.433993 kubelet[3348]: I1216 13:14:54.433991 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/387c7342-95b9-4dbd-ba92-e0bc16ccb9f5-typha-certs\") pod \"calico-typha-6dc8c86d87-hhv48\" (UID: \"387c7342-95b9-4dbd-ba92-e0bc16ccb9f5\") " pod="calico-system/calico-typha-6dc8c86d87-hhv48"
Dec 16 13:14:54.592729 systemd[1]: Created slice kubepods-besteffort-podb691b6e2_8a7d_43b8_9fed_8eb8036d002b.slice - libcontainer container kubepods-besteffort-podb691b6e2_8a7d_43b8_9fed_8eb8036d002b.slice.
Dec 16 13:14:54.636016 kubelet[3348]: I1216 13:14:54.634903 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-lib-modules\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636016 kubelet[3348]: I1216 13:14:54.634944 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-xtables-lock\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636016 kubelet[3348]: I1216 13:14:54.634970 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-flexvol-driver-host\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636016 kubelet[3348]: I1216 13:14:54.635003 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-cni-bin-dir\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636016 kubelet[3348]: I1216 13:14:54.635023 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-node-certs\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636310 kubelet[3348]: I1216 13:14:54.635045 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-policysync\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636310 kubelet[3348]: I1216 13:14:54.635067 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-var-run-calico\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636310 kubelet[3348]: I1216 13:14:54.635106 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6nl\" (UniqueName: \"kubernetes.io/projected/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-kube-api-access-rx6nl\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636310 kubelet[3348]: I1216 13:14:54.635146 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-tigera-ca-bundle\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636310 kubelet[3348]: I1216 13:14:54.635178 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-cni-net-dir\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636507 kubelet[3348]: I1216 13:14:54.635209 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-cni-log-dir\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.636507 kubelet[3348]: I1216 13:14:54.635232 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b691b6e2-8a7d-43b8-9fed-8eb8036d002b-var-lib-calico\") pod \"calico-node-xm5gw\" (UID: \"b691b6e2-8a7d-43b8-9fed-8eb8036d002b\") " pod="calico-system/calico-node-xm5gw"
Dec 16 13:14:54.686945 containerd[1981]: time="2025-12-16T13:14:54.686183060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc8c86d87-hhv48,Uid:387c7342-95b9-4dbd-ba92-e0bc16ccb9f5,Namespace:calico-system,Attempt:0,}"
Dec 16 13:14:54.737605 containerd[1981]: time="2025-12-16T13:14:54.736162849Z" level=info msg="connecting to shim da0f742d3bc32234585da9c2d5af821594c2c289f3ca47d6be814e9890a2e0bf" address="unix:///run/containerd/s/71cb7f5f36e7e23d20bb2c487f1ac7782b38122a48c32735568808f272538cc2" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:14:54.749145 kubelet[3348]: E1216 13:14:54.748914 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.749145 kubelet[3348]: W1216 13:14:54.748949 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.749145 kubelet[3348]: E1216 13:14:54.748986 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.750582 kubelet[3348]: E1216 13:14:54.749680 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.750582 kubelet[3348]: W1216 13:14:54.749696 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.750582 kubelet[3348]: E1216 13:14:54.749716 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.753787 kubelet[3348]: E1216 13:14:54.752641 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.753787 kubelet[3348]: W1216 13:14:54.753676 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.753787 kubelet[3348]: E1216 13:14:54.753719 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.755332 kubelet[3348]: E1216 13:14:54.755202 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.757305 kubelet[3348]: W1216 13:14:54.755551 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.757305 kubelet[3348]: E1216 13:14:54.757263 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.761215 kubelet[3348]: E1216 13:14:54.760982 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.761215 kubelet[3348]: W1216 13:14:54.761033 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.761215 kubelet[3348]: E1216 13:14:54.761055 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.774589 kubelet[3348]: E1216 13:14:54.773697 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.774589 kubelet[3348]: W1216 13:14:54.773722 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.774589 kubelet[3348]: E1216 13:14:54.773747 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.777810 kubelet[3348]: E1216 13:14:54.777366 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.777810 kubelet[3348]: W1216 13:14:54.777393 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.777810 kubelet[3348]: E1216 13:14:54.777416 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.781338 kubelet[3348]: E1216 13:14:54.780702 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.781338 kubelet[3348]: W1216 13:14:54.780724 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.781338 kubelet[3348]: E1216 13:14:54.780749 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.786134 kubelet[3348]: E1216 13:14:54.784946 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.786134 kubelet[3348]: W1216 13:14:54.784969 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.786134 kubelet[3348]: E1216 13:14:54.784992 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.787247 kubelet[3348]: E1216 13:14:54.787222 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.788348 kubelet[3348]: W1216 13:14:54.787243 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.788348 kubelet[3348]: E1216 13:14:54.787703 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.791592 kubelet[3348]: E1216 13:14:54.789660 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.791592 kubelet[3348]: W1216 13:14:54.789678 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.791592 kubelet[3348]: E1216 13:14:54.789699 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.824046 systemd[1]: Started cri-containerd-da0f742d3bc32234585da9c2d5af821594c2c289f3ca47d6be814e9890a2e0bf.scope - libcontainer container da0f742d3bc32234585da9c2d5af821594c2c289f3ca47d6be814e9890a2e0bf.
Dec 16 13:14:54.875948 kubelet[3348]: E1216 13:14:54.875901 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6"
Dec 16 13:14:54.904162 containerd[1981]: time="2025-12-16T13:14:54.903140922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xm5gw,Uid:b691b6e2-8a7d-43b8-9fed-8eb8036d002b,Namespace:calico-system,Attempt:0,}"
Dec 16 13:14:54.908909 kubelet[3348]: E1216 13:14:54.908878 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.908909 kubelet[3348]: W1216 13:14:54.908906 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.909109 kubelet[3348]: E1216 13:14:54.908931 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.912793 kubelet[3348]: E1216 13:14:54.912727 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.912793 kubelet[3348]: W1216 13:14:54.912757 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.912793 kubelet[3348]: E1216 13:14:54.912783 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.913545 kubelet[3348]: E1216 13:14:54.913089 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.913545 kubelet[3348]: W1216 13:14:54.913101 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.913545 kubelet[3348]: E1216 13:14:54.913115 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.913545 kubelet[3348]: E1216 13:14:54.913369 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.913545 kubelet[3348]: W1216 13:14:54.913379 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.913545 kubelet[3348]: E1216 13:14:54.913395 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.913905 kubelet[3348]: E1216 13:14:54.913654 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.913905 kubelet[3348]: W1216 13:14:54.913665 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.913905 kubelet[3348]: E1216 13:14:54.913677 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.913905 kubelet[3348]: E1216 13:14:54.913879 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.913905 kubelet[3348]: W1216 13:14:54.913888 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.913905 kubelet[3348]: E1216 13:14:54.913899 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.915174 kubelet[3348]: E1216 13:14:54.914948 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.915174 kubelet[3348]: W1216 13:14:54.914966 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.915174 kubelet[3348]: E1216 13:14:54.914981 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.916387 kubelet[3348]: E1216 13:14:54.916287 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.917086 kubelet[3348]: W1216 13:14:54.916742 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.917086 kubelet[3348]: E1216 13:14:54.916769 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.918430 kubelet[3348]: E1216 13:14:54.918351 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.919032 kubelet[3348]: W1216 13:14:54.918668 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.919161 kubelet[3348]: E1216 13:14:54.919146 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.920035 kubelet[3348]: E1216 13:14:54.919935 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.920523 kubelet[3348]: W1216 13:14:54.920260 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.920523 kubelet[3348]: E1216 13:14:54.920284 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.921768 kubelet[3348]: E1216 13:14:54.921591 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.921768 kubelet[3348]: W1216 13:14:54.921652 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.922185 kubelet[3348]: E1216 13:14:54.921670 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.923034 kubelet[3348]: E1216 13:14:54.923015 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.923286 kubelet[3348]: W1216 13:14:54.923124 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.923286 kubelet[3348]: E1216 13:14:54.923146 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.924724 kubelet[3348]: E1216 13:14:54.924698 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.924724 kubelet[3348]: W1216 13:14:54.924717 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.924854 kubelet[3348]: E1216 13:14:54.924734 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.924966 kubelet[3348]: E1216 13:14:54.924952 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.925032 kubelet[3348]: W1216 13:14:54.924966 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.925032 kubelet[3348]: E1216 13:14:54.924979 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.925273 kubelet[3348]: E1216 13:14:54.925260 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.925273 kubelet[3348]: W1216 13:14:54.925273 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.925378 kubelet[3348]: E1216 13:14:54.925287 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 13:14:54.928041 kubelet[3348]: E1216 13:14:54.927720 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 13:14:54.928041 kubelet[3348]: W1216 13:14:54.927738 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 13:14:54.928041 kubelet[3348]: E1216 13:14:54.927755 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.930219 kubelet[3348]: E1216 13:14:54.928701 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.930219 kubelet[3348]: W1216 13:14:54.928718 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.930219 kubelet[3348]: E1216 13:14:54.928732 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.930575 kubelet[3348]: E1216 13:14:54.930536 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.930746 kubelet[3348]: W1216 13:14:54.930550 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.930746 kubelet[3348]: E1216 13:14:54.930652 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.930916 kubelet[3348]: E1216 13:14:54.930905 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.931004 kubelet[3348]: W1216 13:14:54.930991 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.931189 kubelet[3348]: E1216 13:14:54.931080 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.931310 kubelet[3348]: E1216 13:14:54.931300 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.931393 kubelet[3348]: W1216 13:14:54.931363 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.931393 kubelet[3348]: E1216 13:14:54.931383 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.941509 kubelet[3348]: E1216 13:14:54.941232 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.941509 kubelet[3348]: W1216 13:14:54.941258 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.941509 kubelet[3348]: E1216 13:14:54.941284 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.941509 kubelet[3348]: I1216 13:14:54.941358 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6-kubelet-dir\") pod \"csi-node-driver-cncts\" (UID: \"bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6\") " pod="calico-system/csi-node-driver-cncts" Dec 16 13:14:54.942695 kubelet[3348]: E1216 13:14:54.942670 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.942695 kubelet[3348]: W1216 13:14:54.942693 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.942834 kubelet[3348]: E1216 13:14:54.942712 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.943029 kubelet[3348]: E1216 13:14:54.943013 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.943084 kubelet[3348]: W1216 13:14:54.943031 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.943084 kubelet[3348]: E1216 13:14:54.943045 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.943414 kubelet[3348]: E1216 13:14:54.943396 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.943414 kubelet[3348]: W1216 13:14:54.943413 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.943507 kubelet[3348]: E1216 13:14:54.943427 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.943645 kubelet[3348]: I1216 13:14:54.943614 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6-varrun\") pod \"csi-node-driver-cncts\" (UID: \"bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6\") " pod="calico-system/csi-node-driver-cncts" Dec 16 13:14:54.944648 kubelet[3348]: E1216 13:14:54.944624 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.944648 kubelet[3348]: W1216 13:14:54.944647 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.945437 kubelet[3348]: E1216 13:14:54.944666 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.945437 kubelet[3348]: E1216 13:14:54.945432 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.945526 kubelet[3348]: W1216 13:14:54.945445 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.945526 kubelet[3348]: E1216 13:14:54.945461 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.945799 kubelet[3348]: E1216 13:14:54.945781 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.945799 kubelet[3348]: W1216 13:14:54.945795 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.945901 kubelet[3348]: E1216 13:14:54.945809 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.947251 kubelet[3348]: I1216 13:14:54.946638 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c6xf\" (UniqueName: \"kubernetes.io/projected/bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6-kube-api-access-2c6xf\") pod \"csi-node-driver-cncts\" (UID: \"bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6\") " pod="calico-system/csi-node-driver-cncts" Dec 16 13:14:54.947251 kubelet[3348]: E1216 13:14:54.946943 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.947251 kubelet[3348]: W1216 13:14:54.946956 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.947251 kubelet[3348]: E1216 13:14:54.946968 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.947741 kubelet[3348]: E1216 13:14:54.947721 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.947741 kubelet[3348]: W1216 13:14:54.947741 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.948846 kubelet[3348]: E1216 13:14:54.947755 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.948846 kubelet[3348]: E1216 13:14:54.948653 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.948846 kubelet[3348]: W1216 13:14:54.948664 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.948846 kubelet[3348]: E1216 13:14:54.948678 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.948846 kubelet[3348]: I1216 13:14:54.948706 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6-registration-dir\") pod \"csi-node-driver-cncts\" (UID: \"bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6\") " pod="calico-system/csi-node-driver-cncts" Dec 16 13:14:54.949091 kubelet[3348]: E1216 13:14:54.948946 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.949091 kubelet[3348]: W1216 13:14:54.948959 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.949091 kubelet[3348]: E1216 13:14:54.948971 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.949091 kubelet[3348]: I1216 13:14:54.949005 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6-socket-dir\") pod \"csi-node-driver-cncts\" (UID: \"bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6\") " pod="calico-system/csi-node-driver-cncts" Dec 16 13:14:54.949824 kubelet[3348]: E1216 13:14:54.949738 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.949892 kubelet[3348]: W1216 13:14:54.949846 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.949892 kubelet[3348]: E1216 13:14:54.949862 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.951117 kubelet[3348]: E1216 13:14:54.950881 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.951117 kubelet[3348]: W1216 13:14:54.950896 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.951117 kubelet[3348]: E1216 13:14:54.950910 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.951268 kubelet[3348]: E1216 13:14:54.951159 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.951268 kubelet[3348]: W1216 13:14:54.951170 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.951268 kubelet[3348]: E1216 13:14:54.951184 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:54.951815 kubelet[3348]: E1216 13:14:54.951794 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:54.951815 kubelet[3348]: W1216 13:14:54.951814 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:54.951917 kubelet[3348]: E1216 13:14:54.951829 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:54.968901 containerd[1981]: time="2025-12-16T13:14:54.968851476Z" level=info msg="connecting to shim e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3" address="unix:///run/containerd/s/209995473715b04fec5374fea3eb495d63c07a251a73b18a9294f7d499136e41" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:14:55.044924 containerd[1981]: time="2025-12-16T13:14:55.044868465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dc8c86d87-hhv48,Uid:387c7342-95b9-4dbd-ba92-e0bc16ccb9f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"da0f742d3bc32234585da9c2d5af821594c2c289f3ca47d6be814e9890a2e0bf\"" Dec 16 13:14:55.046115 systemd[1]: Started cri-containerd-e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3.scope - libcontainer container e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3. Dec 16 13:14:55.047314 containerd[1981]: time="2025-12-16T13:14:55.047276211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 13:14:55.051711 kubelet[3348]: E1216 13:14:55.051682 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.051711 kubelet[3348]: W1216 13:14:55.051709 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.051854 kubelet[3348]: E1216 13:14:55.051732 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.052443 kubelet[3348]: E1216 13:14:55.052358 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.052443 kubelet[3348]: W1216 13:14:55.052379 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.052443 kubelet[3348]: E1216 13:14:55.052398 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.053412 kubelet[3348]: E1216 13:14:55.053387 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.053412 kubelet[3348]: W1216 13:14:55.053404 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.053534 kubelet[3348]: E1216 13:14:55.053422 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.054952 kubelet[3348]: E1216 13:14:55.054929 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.054952 kubelet[3348]: W1216 13:14:55.054946 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.054952 kubelet[3348]: E1216 13:14:55.054963 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.055241 kubelet[3348]: E1216 13:14:55.055227 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.055417 kubelet[3348]: W1216 13:14:55.055346 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.055417 kubelet[3348]: E1216 13:14:55.055366 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.055963 kubelet[3348]: E1216 13:14:55.055814 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.055963 kubelet[3348]: W1216 13:14:55.055838 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.055963 kubelet[3348]: E1216 13:14:55.055854 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.056313 kubelet[3348]: E1216 13:14:55.056232 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.056406 kubelet[3348]: W1216 13:14:55.056390 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.056483 kubelet[3348]: E1216 13:14:55.056470 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.056929 kubelet[3348]: E1216 13:14:55.056914 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.057127 kubelet[3348]: W1216 13:14:55.057007 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.057127 kubelet[3348]: E1216 13:14:55.057026 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.057272 kubelet[3348]: E1216 13:14:55.057243 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.057272 kubelet[3348]: W1216 13:14:55.057268 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.057369 kubelet[3348]: E1216 13:14:55.057283 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.057591 kubelet[3348]: E1216 13:14:55.057540 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.057659 kubelet[3348]: W1216 13:14:55.057554 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.057659 kubelet[3348]: E1216 13:14:55.057632 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.057927 kubelet[3348]: E1216 13:14:55.057909 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.057927 kubelet[3348]: W1216 13:14:55.057923 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.058040 kubelet[3348]: E1216 13:14:55.057936 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.058869 kubelet[3348]: E1216 13:14:55.058851 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.058869 kubelet[3348]: W1216 13:14:55.058865 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.059037 kubelet[3348]: E1216 13:14:55.058879 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.059203 kubelet[3348]: E1216 13:14:55.059182 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.059203 kubelet[3348]: W1216 13:14:55.059192 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.059356 kubelet[3348]: E1216 13:14:55.059205 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.059435 kubelet[3348]: E1216 13:14:55.059417 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.059435 kubelet[3348]: W1216 13:14:55.059426 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.059687 kubelet[3348]: E1216 13:14:55.059438 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.059848 kubelet[3348]: E1216 13:14:55.059816 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.059848 kubelet[3348]: W1216 13:14:55.059832 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.059848 kubelet[3348]: E1216 13:14:55.059846 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.060524 kubelet[3348]: E1216 13:14:55.060270 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.060524 kubelet[3348]: W1216 13:14:55.060284 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.060524 kubelet[3348]: E1216 13:14:55.060297 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.061036 kubelet[3348]: E1216 13:14:55.061018 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.061036 kubelet[3348]: W1216 13:14:55.061033 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.061157 kubelet[3348]: E1216 13:14:55.061046 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.061340 kubelet[3348]: E1216 13:14:55.061323 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.061340 kubelet[3348]: W1216 13:14:55.061338 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.061442 kubelet[3348]: E1216 13:14:55.061353 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.061642 kubelet[3348]: E1216 13:14:55.061624 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.061642 kubelet[3348]: W1216 13:14:55.061638 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.061749 kubelet[3348]: E1216 13:14:55.061652 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.061926 kubelet[3348]: E1216 13:14:55.061909 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.061994 kubelet[3348]: W1216 13:14:55.061925 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.061994 kubelet[3348]: E1216 13:14:55.061939 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.062218 kubelet[3348]: E1216 13:14:55.062201 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.062218 kubelet[3348]: W1216 13:14:55.062215 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.062360 kubelet[3348]: E1216 13:14:55.062228 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.062908 kubelet[3348]: E1216 13:14:55.062863 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.062908 kubelet[3348]: W1216 13:14:55.062879 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.062908 kubelet[3348]: E1216 13:14:55.062893 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.063344 kubelet[3348]: E1216 13:14:55.063275 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.063344 kubelet[3348]: W1216 13:14:55.063299 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.063344 kubelet[3348]: E1216 13:14:55.063314 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.064661 kubelet[3348]: E1216 13:14:55.064635 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.064661 kubelet[3348]: W1216 13:14:55.064659 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.064849 kubelet[3348]: E1216 13:14:55.064675 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.064990 kubelet[3348]: E1216 13:14:55.064943 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.064990 kubelet[3348]: W1216 13:14:55.064958 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.064990 kubelet[3348]: E1216 13:14:55.064972 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:55.082714 kubelet[3348]: E1216 13:14:55.082631 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:55.082714 kubelet[3348]: W1216 13:14:55.082654 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:55.082714 kubelet[3348]: E1216 13:14:55.082677 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:55.099793 containerd[1981]: time="2025-12-16T13:14:55.099706539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xm5gw,Uid:b691b6e2-8a7d-43b8-9fed-8eb8036d002b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3\"" Dec 16 13:14:56.400300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117501585.mount: Deactivated successfully. 
Dec 16 13:14:56.482738 kubelet[3348]: E1216 13:14:56.479699 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:14:57.419631 containerd[1981]: time="2025-12-16T13:14:57.419577055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:57.421460 containerd[1981]: time="2025-12-16T13:14:57.421307172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 16 13:14:57.423980 containerd[1981]: time="2025-12-16T13:14:57.423937554Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:57.427391 containerd[1981]: time="2025-12-16T13:14:57.427350569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:57.428356 containerd[1981]: time="2025-12-16T13:14:57.428138694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.380819336s" Dec 16 13:14:57.428356 containerd[1981]: time="2025-12-16T13:14:57.428165885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 13:14:57.435792 containerd[1981]: time="2025-12-16T13:14:57.435755462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 13:14:57.462409 containerd[1981]: time="2025-12-16T13:14:57.462347821Z" level=info msg="CreateContainer within sandbox \"da0f742d3bc32234585da9c2d5af821594c2c289f3ca47d6be814e9890a2e0bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 13:14:57.478911 containerd[1981]: time="2025-12-16T13:14:57.478865552Z" level=info msg="Container 695885980af25e89976d7a51ba349b269549f0474079dd7271594edff0a51728: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:57.499958 containerd[1981]: time="2025-12-16T13:14:57.499884784Z" level=info msg="CreateContainer within sandbox \"da0f742d3bc32234585da9c2d5af821594c2c289f3ca47d6be814e9890a2e0bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"695885980af25e89976d7a51ba349b269549f0474079dd7271594edff0a51728\"" Dec 16 13:14:57.501025 containerd[1981]: time="2025-12-16T13:14:57.500860299Z" level=info msg="StartContainer for \"695885980af25e89976d7a51ba349b269549f0474079dd7271594edff0a51728\"" Dec 16 13:14:57.503750 containerd[1981]: time="2025-12-16T13:14:57.503716238Z" level=info msg="connecting to shim 695885980af25e89976d7a51ba349b269549f0474079dd7271594edff0a51728" address="unix:///run/containerd/s/71cb7f5f36e7e23d20bb2c487f1ac7782b38122a48c32735568808f272538cc2" protocol=ttrpc version=3 Dec 16 13:14:57.530809 systemd[1]: Started cri-containerd-695885980af25e89976d7a51ba349b269549f0474079dd7271594edff0a51728.scope - libcontainer container 695885980af25e89976d7a51ba349b269549f0474079dd7271594edff0a51728. 
Dec 16 13:14:57.630587 containerd[1981]: time="2025-12-16T13:14:57.630493165Z" level=info msg="StartContainer for \"695885980af25e89976d7a51ba349b269549f0474079dd7271594edff0a51728\" returns successfully" Dec 16 13:14:58.479402 kubelet[3348]: E1216 13:14:58.479358 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:14:58.670152 kubelet[3348]: E1216 13:14:58.670117 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.670300 kubelet[3348]: W1216 13:14:58.670165 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.671424 kubelet[3348]: E1216 13:14:58.671255 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.671917 kubelet[3348]: E1216 13:14:58.671895 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.671917 kubelet[3348]: W1216 13:14:58.671917 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.672168 kubelet[3348]: E1216 13:14:58.671945 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.672281 kubelet[3348]: E1216 13:14:58.672259 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.672344 kubelet[3348]: W1216 13:14:58.672280 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.672344 kubelet[3348]: E1216 13:14:58.672294 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.674352 kubelet[3348]: I1216 13:14:58.671783 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dc8c86d87-hhv48" podStartSLOduration=2.283024901 podStartE2EDuration="4.671756032s" podCreationTimestamp="2025-12-16 13:14:54 +0000 UTC" firstStartedPulling="2025-12-16 13:14:55.046609736 +0000 UTC m=+23.808842694" lastFinishedPulling="2025-12-16 13:14:57.435340866 +0000 UTC m=+26.197573825" observedRunningTime="2025-12-16 13:14:58.658518372 +0000 UTC m=+27.420751353" watchObservedRunningTime="2025-12-16 13:14:58.671756032 +0000 UTC m=+27.433989014" Dec 16 13:14:58.675696 kubelet[3348]: E1216 13:14:58.675677 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.675783 kubelet[3348]: W1216 13:14:58.675703 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.675783 kubelet[3348]: E1216 13:14:58.675723 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.676289 kubelet[3348]: E1216 13:14:58.676223 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.676289 kubelet[3348]: W1216 13:14:58.676238 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.676289 kubelet[3348]: E1216 13:14:58.676254 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.676607 kubelet[3348]: E1216 13:14:58.676591 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.676607 kubelet[3348]: W1216 13:14:58.676607 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.676735 kubelet[3348]: E1216 13:14:58.676621 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.676936 kubelet[3348]: E1216 13:14:58.676898 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.676936 kubelet[3348]: W1216 13:14:58.676933 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.677044 kubelet[3348]: E1216 13:14:58.676947 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.677268 kubelet[3348]: E1216 13:14:58.677253 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.677329 kubelet[3348]: W1216 13:14:58.677268 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.677329 kubelet[3348]: E1216 13:14:58.677281 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.677549 kubelet[3348]: E1216 13:14:58.677536 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.677643 kubelet[3348]: W1216 13:14:58.677550 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.677810 kubelet[3348]: E1216 13:14:58.677586 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.678022 kubelet[3348]: E1216 13:14:58.678008 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.678086 kubelet[3348]: W1216 13:14:58.678023 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.678086 kubelet[3348]: E1216 13:14:58.678037 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.678309 kubelet[3348]: E1216 13:14:58.678294 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.678366 kubelet[3348]: W1216 13:14:58.678335 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.678366 kubelet[3348]: E1216 13:14:58.678350 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.678949 kubelet[3348]: E1216 13:14:58.678843 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.678949 kubelet[3348]: W1216 13:14:58.678866 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.678949 kubelet[3348]: E1216 13:14:58.678879 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.679387 kubelet[3348]: E1216 13:14:58.679184 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.679387 kubelet[3348]: W1216 13:14:58.679200 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.679387 kubelet[3348]: E1216 13:14:58.679247 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.679701 kubelet[3348]: E1216 13:14:58.679687 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.679764 kubelet[3348]: W1216 13:14:58.679702 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.679764 kubelet[3348]: E1216 13:14:58.679716 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.680146 kubelet[3348]: E1216 13:14:58.680130 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.680201 kubelet[3348]: W1216 13:14:58.680147 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.680201 kubelet[3348]: E1216 13:14:58.680163 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.690584 kubelet[3348]: E1216 13:14:58.690529 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.690584 kubelet[3348]: W1216 13:14:58.690578 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.690802 kubelet[3348]: E1216 13:14:58.690603 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.691130 kubelet[3348]: E1216 13:14:58.690901 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.691130 kubelet[3348]: W1216 13:14:58.690913 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.691130 kubelet[3348]: E1216 13:14:58.690926 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.691295 kubelet[3348]: E1216 13:14:58.691200 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.691295 kubelet[3348]: W1216 13:14:58.691209 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.691295 kubelet[3348]: E1216 13:14:58.691223 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.691571 kubelet[3348]: E1216 13:14:58.691530 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.691571 kubelet[3348]: W1216 13:14:58.691546 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.691682 kubelet[3348]: E1216 13:14:58.691580 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.691877 kubelet[3348]: E1216 13:14:58.691812 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.691877 kubelet[3348]: W1216 13:14:58.691824 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.691877 kubelet[3348]: E1216 13:14:58.691836 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.692253 kubelet[3348]: E1216 13:14:58.692049 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.693299 kubelet[3348]: W1216 13:14:58.693270 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.693381 kubelet[3348]: E1216 13:14:58.693303 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.693835 kubelet[3348]: E1216 13:14:58.693809 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.693835 kubelet[3348]: W1216 13:14:58.693827 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.693960 kubelet[3348]: E1216 13:14:58.693841 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.694504 kubelet[3348]: E1216 13:14:58.694487 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.694779 kubelet[3348]: W1216 13:14:58.694505 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.694779 kubelet[3348]: E1216 13:14:58.694519 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.695473 kubelet[3348]: E1216 13:14:58.695456 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.695473 kubelet[3348]: W1216 13:14:58.695473 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.695617 kubelet[3348]: E1216 13:14:58.695488 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.696992 kubelet[3348]: E1216 13:14:58.696913 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.696992 kubelet[3348]: W1216 13:14:58.696927 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.696992 kubelet[3348]: E1216 13:14:58.696942 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.697591 kubelet[3348]: E1216 13:14:58.697358 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.697591 kubelet[3348]: W1216 13:14:58.697373 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.697591 kubelet[3348]: E1216 13:14:58.697386 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.697763 kubelet[3348]: E1216 13:14:58.697661 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.697763 kubelet[3348]: W1216 13:14:58.697672 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.697763 kubelet[3348]: E1216 13:14:58.697685 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.698059 kubelet[3348]: E1216 13:14:58.697925 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.698059 kubelet[3348]: W1216 13:14:58.697936 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.698059 kubelet[3348]: E1216 13:14:58.697947 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.698497 kubelet[3348]: E1216 13:14:58.698236 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.698497 kubelet[3348]: W1216 13:14:58.698246 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.698497 kubelet[3348]: E1216 13:14:58.698259 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.698654 kubelet[3348]: E1216 13:14:58.698505 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.698654 kubelet[3348]: W1216 13:14:58.698518 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.698654 kubelet[3348]: E1216 13:14:58.698531 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.699186 kubelet[3348]: E1216 13:14:58.699160 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.699186 kubelet[3348]: W1216 13:14:58.699176 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.699288 kubelet[3348]: E1216 13:14:58.699190 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.699875 kubelet[3348]: E1216 13:14:58.699806 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.699875 kubelet[3348]: W1216 13:14:58.699821 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.699875 kubelet[3348]: E1216 13:14:58.699834 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 13:14:58.700648 kubelet[3348]: E1216 13:14:58.700594 3348 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 13:14:58.700648 kubelet[3348]: W1216 13:14:58.700608 3348 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 13:14:58.700648 kubelet[3348]: E1216 13:14:58.700622 3348 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 13:14:58.747654 containerd[1981]: time="2025-12-16T13:14:58.747513705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:58.750937 containerd[1981]: time="2025-12-16T13:14:58.750754167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 16 13:14:58.753082 containerd[1981]: time="2025-12-16T13:14:58.753040964Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:58.756715 containerd[1981]: time="2025-12-16T13:14:58.756673052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:14:58.757780 containerd[1981]: time="2025-12-16T13:14:58.757244126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.321448967s" Dec 16 13:14:58.757780 containerd[1981]: time="2025-12-16T13:14:58.757285430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 13:14:58.765528 containerd[1981]: time="2025-12-16T13:14:58.765477063Z" level=info msg="CreateContainer within sandbox \"e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 13:14:58.808289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1091379680.mount: Deactivated successfully. Dec 16 13:14:58.809028 containerd[1981]: time="2025-12-16T13:14:58.808705769Z" level=info msg="Container ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:14:58.823994 containerd[1981]: time="2025-12-16T13:14:58.823950899Z" level=info msg="CreateContainer within sandbox \"e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8\"" Dec 16 13:14:58.824697 containerd[1981]: time="2025-12-16T13:14:58.824659718Z" level=info msg="StartContainer for \"ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8\"" Dec 16 13:14:58.826129 containerd[1981]: time="2025-12-16T13:14:58.826100695Z" level=info msg="connecting to shim ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8" address="unix:///run/containerd/s/209995473715b04fec5374fea3eb495d63c07a251a73b18a9294f7d499136e41" protocol=ttrpc version=3 Dec 16 13:14:58.851794 systemd[1]: Started 
cri-containerd-ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8.scope - libcontainer container ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8. Dec 16 13:14:58.941666 containerd[1981]: time="2025-12-16T13:14:58.941357811Z" level=info msg="StartContainer for \"ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8\" returns successfully" Dec 16 13:14:58.953224 systemd[1]: cri-containerd-ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8.scope: Deactivated successfully. Dec 16 13:14:58.978202 containerd[1981]: time="2025-12-16T13:14:58.978147578Z" level=info msg="received container exit event container_id:\"ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8\" id:\"ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8\" pid:4215 exited_at:{seconds:1765890898 nanos:958108066}" Dec 16 13:14:59.011474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab53f62c66d4b4ab657074d4f8a8b4d1734a0f3ad099a95ad1ec7696160a6de8-rootfs.mount: Deactivated successfully. 
Dec 16 13:14:59.633767 kubelet[3348]: I1216 13:14:59.633554 3348 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 13:14:59.636896 containerd[1981]: time="2025-12-16T13:14:59.636396574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 13:15:00.480094 kubelet[3348]: E1216 13:15:00.479910 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:02.480988 kubelet[3348]: E1216 13:15:02.480410 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:04.492491 kubelet[3348]: E1216 13:15:04.491943 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:05.123215 containerd[1981]: time="2025-12-16T13:15:05.123163219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:15:05.125187 containerd[1981]: time="2025-12-16T13:15:05.124994504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 16 13:15:05.127512 containerd[1981]: time="2025-12-16T13:15:05.127459068Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:15:05.131082 containerd[1981]: time="2025-12-16T13:15:05.131017580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:15:05.131757 containerd[1981]: time="2025-12-16T13:15:05.131646124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.495211467s" Dec 16 13:15:05.131757 containerd[1981]: time="2025-12-16T13:15:05.131674274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 13:15:05.147047 containerd[1981]: time="2025-12-16T13:15:05.146994199Z" level=info msg="CreateContainer within sandbox \"e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:15:05.180599 containerd[1981]: time="2025-12-16T13:15:05.179983626Z" level=info msg="Container 6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:15:05.226011 containerd[1981]: time="2025-12-16T13:15:05.225947716Z" level=info msg="CreateContainer within sandbox \"e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6\"" Dec 16 13:15:05.227613 containerd[1981]: time="2025-12-16T13:15:05.226705762Z" 
level=info msg="StartContainer for \"6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6\"" Dec 16 13:15:05.228533 containerd[1981]: time="2025-12-16T13:15:05.228499661Z" level=info msg="connecting to shim 6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6" address="unix:///run/containerd/s/209995473715b04fec5374fea3eb495d63c07a251a73b18a9294f7d499136e41" protocol=ttrpc version=3 Dec 16 13:15:05.259130 systemd[1]: Started cri-containerd-6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6.scope - libcontainer container 6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6. Dec 16 13:15:05.337270 containerd[1981]: time="2025-12-16T13:15:05.337152671Z" level=info msg="StartContainer for \"6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6\" returns successfully" Dec 16 13:15:06.404793 systemd[1]: cri-containerd-6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6.scope: Deactivated successfully. Dec 16 13:15:06.405684 systemd[1]: cri-containerd-6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6.scope: Consumed 627ms CPU time, 160M memory peak, 712K read from disk, 171.3M written to disk. 
Dec 16 13:15:06.454228 containerd[1981]: time="2025-12-16T13:15:06.454177964Z" level=info msg="received container exit event container_id:\"6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6\" id:\"6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6\" pid:4278 exited_at:{seconds:1765890906 nanos:424121109}" Dec 16 13:15:06.480943 kubelet[3348]: E1216 13:15:06.480812 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:06.491331 kubelet[3348]: I1216 13:15:06.489420 3348 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 13:15:06.570554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6494133df0805764cc1921a7557f9c96290f3cacae88cb20a2bf1e45751ac8d6-rootfs.mount: Deactivated successfully. Dec 16 13:15:06.598770 systemd[1]: Created slice kubepods-besteffort-podeb021275_6ff0_4073_a285_a44761b754c0.slice - libcontainer container kubepods-besteffort-podeb021275_6ff0_4073_a285_a44761b754c0.slice. Dec 16 13:15:06.622405 systemd[1]: Created slice kubepods-burstable-pod5e30039f_976c_4f39_a91e_eac0996660a4.slice - libcontainer container kubepods-burstable-pod5e30039f_976c_4f39_a91e_eac0996660a4.slice. Dec 16 13:15:06.633620 systemd[1]: Created slice kubepods-besteffort-pod283d7557_65a8_4b3b_9bfa_2489f569eafb.slice - libcontainer container kubepods-besteffort-pod283d7557_65a8_4b3b_9bfa_2489f569eafb.slice. 
Dec 16 13:15:06.648004 kubelet[3348]: I1216 13:15:06.647958 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bvj7\" (UniqueName: \"kubernetes.io/projected/5e30039f-976c-4f39-a91e-eac0996660a4-kube-api-access-4bvj7\") pod \"coredns-66bc5c9577-mpggz\" (UID: \"5e30039f-976c-4f39-a91e-eac0996660a4\") " pod="kube-system/coredns-66bc5c9577-mpggz" Dec 16 13:15:06.648193 kubelet[3348]: I1216 13:15:06.648026 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/849d73a2-70ae-4c16-a2df-5353f11e5191-goldmane-key-pair\") pod \"goldmane-7c778bb748-mtpks\" (UID: \"849d73a2-70ae-4c16-a2df-5353f11e5191\") " pod="calico-system/goldmane-7c778bb748-mtpks" Dec 16 13:15:06.648193 kubelet[3348]: I1216 13:15:06.648148 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkscx\" (UniqueName: \"kubernetes.io/projected/d1b7644f-3acf-411e-a5e8-2f3cc85e178b-kube-api-access-hkscx\") pod \"calico-apiserver-5dbb4c8d86-x8rs5\" (UID: \"d1b7644f-3acf-411e-a5e8-2f3cc85e178b\") " pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" Dec 16 13:15:06.648193 kubelet[3348]: I1216 13:15:06.648175 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e30039f-976c-4f39-a91e-eac0996660a4-config-volume\") pod \"coredns-66bc5c9577-mpggz\" (UID: \"5e30039f-976c-4f39-a91e-eac0996660a4\") " pod="kube-system/coredns-66bc5c9577-mpggz" Dec 16 13:15:06.648343 kubelet[3348]: I1216 13:15:06.648206 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kxq6\" (UniqueName: \"kubernetes.io/projected/283d7557-65a8-4b3b-9bfa-2489f569eafb-kube-api-access-4kxq6\") pod \"calico-kube-controllers-6cf79d7c7c-wlbjc\" (UID: 
\"283d7557-65a8-4b3b-9bfa-2489f569eafb\") " pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" Dec 16 13:15:06.648343 kubelet[3348]: I1216 13:15:06.648234 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb021275-6ff0-4073-a285-a44761b754c0-whisker-backend-key-pair\") pod \"whisker-79c5d89b7b-5j6pc\" (UID: \"eb021275-6ff0-4073-a285-a44761b754c0\") " pod="calico-system/whisker-79c5d89b7b-5j6pc" Dec 16 13:15:06.648343 kubelet[3348]: I1216 13:15:06.648276 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxlt\" (UniqueName: \"kubernetes.io/projected/97ebb483-74aa-4963-b528-353f8ea2fd10-kube-api-access-phxlt\") pod \"calico-apiserver-5dbb4c8d86-dk448\" (UID: \"97ebb483-74aa-4963-b528-353f8ea2fd10\") " pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" Dec 16 13:15:06.648343 kubelet[3348]: I1216 13:15:06.648304 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/849d73a2-70ae-4c16-a2df-5353f11e5191-config\") pod \"goldmane-7c778bb748-mtpks\" (UID: \"849d73a2-70ae-4c16-a2df-5353f11e5191\") " pod="calico-system/goldmane-7c778bb748-mtpks" Dec 16 13:15:06.648343 kubelet[3348]: I1216 13:15:06.648334 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f45875d-2734-4fdc-b236-7b99e52c65eb-config-volume\") pod \"coredns-66bc5c9577-4bxdj\" (UID: \"1f45875d-2734-4fdc-b236-7b99e52c65eb\") " pod="kube-system/coredns-66bc5c9577-4bxdj" Dec 16 13:15:06.649141 kubelet[3348]: I1216 13:15:06.648358 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/283d7557-65a8-4b3b-9bfa-2489f569eafb-tigera-ca-bundle\") pod \"calico-kube-controllers-6cf79d7c7c-wlbjc\" (UID: \"283d7557-65a8-4b3b-9bfa-2489f569eafb\") " pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" Dec 16 13:15:06.649141 kubelet[3348]: I1216 13:15:06.648383 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97ebb483-74aa-4963-b528-353f8ea2fd10-calico-apiserver-certs\") pod \"calico-apiserver-5dbb4c8d86-dk448\" (UID: \"97ebb483-74aa-4963-b528-353f8ea2fd10\") " pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" Dec 16 13:15:06.649141 kubelet[3348]: I1216 13:15:06.648410 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjg48\" (UniqueName: \"kubernetes.io/projected/eb021275-6ff0-4073-a285-a44761b754c0-kube-api-access-pjg48\") pod \"whisker-79c5d89b7b-5j6pc\" (UID: \"eb021275-6ff0-4073-a285-a44761b754c0\") " pod="calico-system/whisker-79c5d89b7b-5j6pc" Dec 16 13:15:06.649141 kubelet[3348]: I1216 13:15:06.648431 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9xs\" (UniqueName: \"kubernetes.io/projected/1f45875d-2734-4fdc-b236-7b99e52c65eb-kube-api-access-zp9xs\") pod \"coredns-66bc5c9577-4bxdj\" (UID: \"1f45875d-2734-4fdc-b236-7b99e52c65eb\") " pod="kube-system/coredns-66bc5c9577-4bxdj" Dec 16 13:15:06.649141 kubelet[3348]: I1216 13:15:06.648458 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9kj4\" (UniqueName: \"kubernetes.io/projected/849d73a2-70ae-4c16-a2df-5353f11e5191-kube-api-access-m9kj4\") pod \"goldmane-7c778bb748-mtpks\" (UID: \"849d73a2-70ae-4c16-a2df-5353f11e5191\") " pod="calico-system/goldmane-7c778bb748-mtpks" Dec 16 13:15:06.649351 kubelet[3348]: I1216 13:15:06.648479 3348 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb021275-6ff0-4073-a285-a44761b754c0-whisker-ca-bundle\") pod \"whisker-79c5d89b7b-5j6pc\" (UID: \"eb021275-6ff0-4073-a285-a44761b754c0\") " pod="calico-system/whisker-79c5d89b7b-5j6pc" Dec 16 13:15:06.649351 kubelet[3348]: I1216 13:15:06.648503 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1b7644f-3acf-411e-a5e8-2f3cc85e178b-calico-apiserver-certs\") pod \"calico-apiserver-5dbb4c8d86-x8rs5\" (UID: \"d1b7644f-3acf-411e-a5e8-2f3cc85e178b\") " pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" Dec 16 13:15:06.649351 kubelet[3348]: I1216 13:15:06.648540 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/849d73a2-70ae-4c16-a2df-5353f11e5191-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-mtpks\" (UID: \"849d73a2-70ae-4c16-a2df-5353f11e5191\") " pod="calico-system/goldmane-7c778bb748-mtpks" Dec 16 13:15:06.649531 systemd[1]: Created slice kubepods-besteffort-podd1b7644f_3acf_411e_a5e8_2f3cc85e178b.slice - libcontainer container kubepods-besteffort-podd1b7644f_3acf_411e_a5e8_2f3cc85e178b.slice. Dec 16 13:15:06.658241 systemd[1]: Created slice kubepods-burstable-pod1f45875d_2734_4fdc_b236_7b99e52c65eb.slice - libcontainer container kubepods-burstable-pod1f45875d_2734_4fdc_b236_7b99e52c65eb.slice. Dec 16 13:15:06.669832 systemd[1]: Created slice kubepods-besteffort-pod97ebb483_74aa_4963_b528_353f8ea2fd10.slice - libcontainer container kubepods-besteffort-pod97ebb483_74aa_4963_b528_353f8ea2fd10.slice. 
Dec 16 13:15:06.677376 systemd[1]: Created slice kubepods-besteffort-pod849d73a2_70ae_4c16_a2df_5353f11e5191.slice - libcontainer container kubepods-besteffort-pod849d73a2_70ae_4c16_a2df_5353f11e5191.slice. Dec 16 13:15:06.925614 containerd[1981]: time="2025-12-16T13:15:06.925482850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c5d89b7b-5j6pc,Uid:eb021275-6ff0-4073-a285-a44761b754c0,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:06.935189 containerd[1981]: time="2025-12-16T13:15:06.935147288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mpggz,Uid:5e30039f-976c-4f39-a91e-eac0996660a4,Namespace:kube-system,Attempt:0,}" Dec 16 13:15:06.950721 containerd[1981]: time="2025-12-16T13:15:06.950684747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf79d7c7c-wlbjc,Uid:283d7557-65a8-4b3b-9bfa-2489f569eafb,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:06.970617 containerd[1981]: time="2025-12-16T13:15:06.970389748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-x8rs5,Uid:d1b7644f-3acf-411e-a5e8-2f3cc85e178b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:15:06.971341 containerd[1981]: time="2025-12-16T13:15:06.971319174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bxdj,Uid:1f45875d-2734-4fdc-b236-7b99e52c65eb,Namespace:kube-system,Attempt:0,}" Dec 16 13:15:07.027339 containerd[1981]: time="2025-12-16T13:15:07.027282488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-dk448,Uid:97ebb483-74aa-4963-b528-353f8ea2fd10,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:15:07.037551 containerd[1981]: time="2025-12-16T13:15:07.028420676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtpks,Uid:849d73a2-70ae-4c16-a2df-5353f11e5191,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:07.059887 containerd[1981]: 
time="2025-12-16T13:15:07.059819955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 13:15:07.360869 containerd[1981]: time="2025-12-16T13:15:07.360803837Z" level=error msg="Failed to destroy network for sandbox \"61220162ec276431fb04c39ce3b0db55184a7fc0b82202a1025e8942230c1d5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.366356 containerd[1981]: time="2025-12-16T13:15:07.366283321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bxdj,Uid:1f45875d-2734-4fdc-b236-7b99e52c65eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61220162ec276431fb04c39ce3b0db55184a7fc0b82202a1025e8942230c1d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.366853 kubelet[3348]: E1216 13:15:07.366806 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61220162ec276431fb04c39ce3b0db55184a7fc0b82202a1025e8942230c1d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.367003 kubelet[3348]: E1216 13:15:07.366890 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61220162ec276431fb04c39ce3b0db55184a7fc0b82202a1025e8942230c1d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-4bxdj" Dec 16 13:15:07.367003 kubelet[3348]: E1216 13:15:07.366919 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61220162ec276431fb04c39ce3b0db55184a7fc0b82202a1025e8942230c1d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4bxdj" Dec 16 13:15:07.367089 kubelet[3348]: E1216 13:15:07.366993 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4bxdj_kube-system(1f45875d-2734-4fdc-b236-7b99e52c65eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4bxdj_kube-system(1f45875d-2734-4fdc-b236-7b99e52c65eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61220162ec276431fb04c39ce3b0db55184a7fc0b82202a1025e8942230c1d5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4bxdj" podUID="1f45875d-2734-4fdc-b236-7b99e52c65eb" Dec 16 13:15:07.369440 containerd[1981]: time="2025-12-16T13:15:07.369383668Z" level=error msg="Failed to destroy network for sandbox \"243d6cbd5d8aff257df473d50a8365c355e891ca1c9a376d922aceafd3e33718\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.371871 containerd[1981]: time="2025-12-16T13:15:07.371821720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mpggz,Uid:5e30039f-976c-4f39-a91e-eac0996660a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"243d6cbd5d8aff257df473d50a8365c355e891ca1c9a376d922aceafd3e33718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.393260 kubelet[3348]: E1216 13:15:07.393188 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243d6cbd5d8aff257df473d50a8365c355e891ca1c9a376d922aceafd3e33718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.393587 kubelet[3348]: E1216 13:15:07.393517 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243d6cbd5d8aff257df473d50a8365c355e891ca1c9a376d922aceafd3e33718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mpggz" Dec 16 13:15:07.393771 kubelet[3348]: E1216 13:15:07.393751 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"243d6cbd5d8aff257df473d50a8365c355e891ca1c9a376d922aceafd3e33718\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mpggz" Dec 16 13:15:07.394591 kubelet[3348]: E1216 13:15:07.393994 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mpggz_kube-system(5e30039f-976c-4f39-a91e-eac0996660a4)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-66bc5c9577-mpggz_kube-system(5e30039f-976c-4f39-a91e-eac0996660a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"243d6cbd5d8aff257df473d50a8365c355e891ca1c9a376d922aceafd3e33718\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mpggz" podUID="5e30039f-976c-4f39-a91e-eac0996660a4" Dec 16 13:15:07.430410 containerd[1981]: time="2025-12-16T13:15:07.430322838Z" level=error msg="Failed to destroy network for sandbox \"8241ed5514d33a8a30c1943b1b3acf0f8bc66c2d35301535e61cc460873ad526\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.438776 containerd[1981]: time="2025-12-16T13:15:07.437790230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-dk448,Uid:97ebb483-74aa-4963-b528-353f8ea2fd10,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8241ed5514d33a8a30c1943b1b3acf0f8bc66c2d35301535e61cc460873ad526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.438776 containerd[1981]: time="2025-12-16T13:15:07.437919338Z" level=error msg="Failed to destroy network for sandbox \"9fa1b1426240c58b88e476c16cbc74832491acbe83844d9b8ff4f438eac15e10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.439015 kubelet[3348]: E1216 13:15:07.438112 3348 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8241ed5514d33a8a30c1943b1b3acf0f8bc66c2d35301535e61cc460873ad526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.439015 kubelet[3348]: E1216 13:15:07.438165 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8241ed5514d33a8a30c1943b1b3acf0f8bc66c2d35301535e61cc460873ad526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" Dec 16 13:15:07.439015 kubelet[3348]: E1216 13:15:07.438184 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8241ed5514d33a8a30c1943b1b3acf0f8bc66c2d35301535e61cc460873ad526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" Dec 16 13:15:07.439174 kubelet[3348]: E1216 13:15:07.438251 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dbb4c8d86-dk448_calico-apiserver(97ebb483-74aa-4963-b528-353f8ea2fd10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dbb4c8d86-dk448_calico-apiserver(97ebb483-74aa-4963-b528-353f8ea2fd10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8241ed5514d33a8a30c1943b1b3acf0f8bc66c2d35301535e61cc460873ad526\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:15:07.440418 containerd[1981]: time="2025-12-16T13:15:07.440276830Z" level=error msg="Failed to destroy network for sandbox \"1940850cc755004d8584c13c41eaed6ac2409f6965dc1f421ba9f092925a395d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.441876 containerd[1981]: time="2025-12-16T13:15:07.441733298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtpks,Uid:849d73a2-70ae-4c16-a2df-5353f11e5191,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa1b1426240c58b88e476c16cbc74832491acbe83844d9b8ff4f438eac15e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.442052 kubelet[3348]: E1216 13:15:07.441979 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa1b1426240c58b88e476c16cbc74832491acbe83844d9b8ff4f438eac15e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.442052 kubelet[3348]: E1216 13:15:07.442042 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa1b1426240c58b88e476c16cbc74832491acbe83844d9b8ff4f438eac15e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mtpks" Dec 16 13:15:07.443078 kubelet[3348]: E1216 13:15:07.442070 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa1b1426240c58b88e476c16cbc74832491acbe83844d9b8ff4f438eac15e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mtpks" Dec 16 13:15:07.443078 kubelet[3348]: E1216 13:15:07.442145 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-mtpks_calico-system(849d73a2-70ae-4c16-a2df-5353f11e5191)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-mtpks_calico-system(849d73a2-70ae-4c16-a2df-5353f11e5191)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fa1b1426240c58b88e476c16cbc74832491acbe83844d9b8ff4f438eac15e10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:15:07.444694 containerd[1981]: time="2025-12-16T13:15:07.444648360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c5d89b7b-5j6pc,Uid:eb021275-6ff0-4073-a285-a44761b754c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1940850cc755004d8584c13c41eaed6ac2409f6965dc1f421ba9f092925a395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
13:15:07.446475 kubelet[3348]: E1216 13:15:07.444872 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1940850cc755004d8584c13c41eaed6ac2409f6965dc1f421ba9f092925a395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.446475 kubelet[3348]: E1216 13:15:07.444929 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1940850cc755004d8584c13c41eaed6ac2409f6965dc1f421ba9f092925a395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79c5d89b7b-5j6pc" Dec 16 13:15:07.446475 kubelet[3348]: E1216 13:15:07.444955 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1940850cc755004d8584c13c41eaed6ac2409f6965dc1f421ba9f092925a395d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79c5d89b7b-5j6pc" Dec 16 13:15:07.446690 kubelet[3348]: E1216 13:15:07.445013 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79c5d89b7b-5j6pc_calico-system(eb021275-6ff0-4073-a285-a44761b754c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79c5d89b7b-5j6pc_calico-system(eb021275-6ff0-4073-a285-a44761b754c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1940850cc755004d8584c13c41eaed6ac2409f6965dc1f421ba9f092925a395d\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79c5d89b7b-5j6pc" podUID="eb021275-6ff0-4073-a285-a44761b754c0" Dec 16 13:15:07.448117 containerd[1981]: time="2025-12-16T13:15:07.448073573Z" level=error msg="Failed to destroy network for sandbox \"823da981c7bb111c3f14cfe8748d0e15d961631b3c9ee90cb2c3035a922e17ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.452294 containerd[1981]: time="2025-12-16T13:15:07.451785604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf79d7c7c-wlbjc,Uid:283d7557-65a8-4b3b-9bfa-2489f569eafb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"823da981c7bb111c3f14cfe8748d0e15d961631b3c9ee90cb2c3035a922e17ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.455551 kubelet[3348]: E1216 13:15:07.454151 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823da981c7bb111c3f14cfe8748d0e15d961631b3c9ee90cb2c3035a922e17ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.455551 kubelet[3348]: E1216 13:15:07.455535 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823da981c7bb111c3f14cfe8748d0e15d961631b3c9ee90cb2c3035a922e17ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" Dec 16 13:15:07.455551 kubelet[3348]: E1216 13:15:07.455577 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823da981c7bb111c3f14cfe8748d0e15d961631b3c9ee90cb2c3035a922e17ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" Dec 16 13:15:07.456685 kubelet[3348]: E1216 13:15:07.455650 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cf79d7c7c-wlbjc_calico-system(283d7557-65a8-4b3b-9bfa-2489f569eafb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cf79d7c7c-wlbjc_calico-system(283d7557-65a8-4b3b-9bfa-2489f569eafb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"823da981c7bb111c3f14cfe8748d0e15d961631b3c9ee90cb2c3035a922e17ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:15:07.456767 containerd[1981]: time="2025-12-16T13:15:07.456035728Z" level=error msg="Failed to destroy network for sandbox \"f1cacff69c3b04d708ab7760e785ab159cdc73f60210ca38fa109d640afe6234\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.458711 containerd[1981]: time="2025-12-16T13:15:07.458537467Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-x8rs5,Uid:d1b7644f-3acf-411e-a5e8-2f3cc85e178b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1cacff69c3b04d708ab7760e785ab159cdc73f60210ca38fa109d640afe6234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.459621 kubelet[3348]: E1216 13:15:07.459581 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1cacff69c3b04d708ab7760e785ab159cdc73f60210ca38fa109d640afe6234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:07.459732 kubelet[3348]: E1216 13:15:07.459642 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1cacff69c3b04d708ab7760e785ab159cdc73f60210ca38fa109d640afe6234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" Dec 16 13:15:07.459732 kubelet[3348]: E1216 13:15:07.459666 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1cacff69c3b04d708ab7760e785ab159cdc73f60210ca38fa109d640afe6234\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" Dec 16 13:15:07.459822 
kubelet[3348]: E1216 13:15:07.459731 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dbb4c8d86-x8rs5_calico-apiserver(d1b7644f-3acf-411e-a5e8-2f3cc85e178b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dbb4c8d86-x8rs5_calico-apiserver(d1b7644f-3acf-411e-a5e8-2f3cc85e178b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1cacff69c3b04d708ab7760e785ab159cdc73f60210ca38fa109d640afe6234\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:15:08.485423 systemd[1]: Created slice kubepods-besteffort-podbb85ac3e_0aa1_45a4_b775_5a01ecf1dcb6.slice - libcontainer container kubepods-besteffort-podbb85ac3e_0aa1_45a4_b775_5a01ecf1dcb6.slice. Dec 16 13:15:08.491281 containerd[1981]: time="2025-12-16T13:15:08.491222224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cncts,Uid:bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:08.573860 containerd[1981]: time="2025-12-16T13:15:08.573805826Z" level=error msg="Failed to destroy network for sandbox \"00b208e0d4557a2b1ee3a215541a006045765a162fd542f93da8d39d773fba93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:08.577668 systemd[1]: run-netns-cni\x2d09fb0957\x2dd3f5\x2d33f8\x2d19d3\x2d9c4408f554d6.mount: Deactivated successfully. 
Dec 16 13:15:08.578255 containerd[1981]: time="2025-12-16T13:15:08.578205450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cncts,Uid:bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00b208e0d4557a2b1ee3a215541a006045765a162fd542f93da8d39d773fba93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:08.580574 kubelet[3348]: E1216 13:15:08.578497 3348 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00b208e0d4557a2b1ee3a215541a006045765a162fd542f93da8d39d773fba93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 13:15:08.580574 kubelet[3348]: E1216 13:15:08.578635 3348 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00b208e0d4557a2b1ee3a215541a006045765a162fd542f93da8d39d773fba93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cncts" Dec 16 13:15:08.580574 kubelet[3348]: E1216 13:15:08.578666 3348 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00b208e0d4557a2b1ee3a215541a006045765a162fd542f93da8d39d773fba93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cncts" 
Dec 16 13:15:08.580989 kubelet[3348]: E1216 13:15:08.578734 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00b208e0d4557a2b1ee3a215541a006045765a162fd542f93da8d39d773fba93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:15.328371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287389586.mount: Deactivated successfully. Dec 16 13:15:15.421774 containerd[1981]: time="2025-12-16T13:15:15.409664786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:15:15.458659 containerd[1981]: time="2025-12-16T13:15:15.458604727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 16 13:15:15.459661 containerd[1981]: time="2025-12-16T13:15:15.459623231Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:15:15.555144 containerd[1981]: time="2025-12-16T13:15:15.555098215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:15:15.561524 containerd[1981]: time="2025-12-16T13:15:15.561478203Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.497544668s" Dec 16 13:15:15.561524 containerd[1981]: time="2025-12-16T13:15:15.561524375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 13:15:15.607058 containerd[1981]: time="2025-12-16T13:15:15.606993985Z" level=info msg="CreateContainer within sandbox \"e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 13:15:15.679729 containerd[1981]: time="2025-12-16T13:15:15.679608746Z" level=info msg="Container 90e9c6aae93d2b93c44052dee6100e0e863fbcabe99b2e650b7471af73971820: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:15:15.683703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219722381.mount: Deactivated successfully. 
Dec 16 13:15:15.749257 containerd[1981]: time="2025-12-16T13:15:15.749206049Z" level=info msg="CreateContainer within sandbox \"e928fedf34e30be4dc7c80a01a6d8dcc4962eb773cd22b249ee8a9a57fd700d3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"90e9c6aae93d2b93c44052dee6100e0e863fbcabe99b2e650b7471af73971820\"" Dec 16 13:15:15.750263 containerd[1981]: time="2025-12-16T13:15:15.750230584Z" level=info msg="StartContainer for \"90e9c6aae93d2b93c44052dee6100e0e863fbcabe99b2e650b7471af73971820\"" Dec 16 13:15:15.755204 containerd[1981]: time="2025-12-16T13:15:15.755150727Z" level=info msg="connecting to shim 90e9c6aae93d2b93c44052dee6100e0e863fbcabe99b2e650b7471af73971820" address="unix:///run/containerd/s/209995473715b04fec5374fea3eb495d63c07a251a73b18a9294f7d499136e41" protocol=ttrpc version=3 Dec 16 13:15:15.923801 systemd[1]: Started cri-containerd-90e9c6aae93d2b93c44052dee6100e0e863fbcabe99b2e650b7471af73971820.scope - libcontainer container 90e9c6aae93d2b93c44052dee6100e0e863fbcabe99b2e650b7471af73971820. Dec 16 13:15:16.089417 containerd[1981]: time="2025-12-16T13:15:16.089247297Z" level=info msg="StartContainer for \"90e9c6aae93d2b93c44052dee6100e0e863fbcabe99b2e650b7471af73971820\" returns successfully" Dec 16 13:15:16.501943 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 13:15:16.503190 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 13:15:16.776496 kubelet[3348]: I1216 13:15:16.775654 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xm5gw" podStartSLOduration=2.314486489 podStartE2EDuration="22.775631493s" podCreationTimestamp="2025-12-16 13:14:54 +0000 UTC" firstStartedPulling="2025-12-16 13:14:55.1016621 +0000 UTC m=+23.863895060" lastFinishedPulling="2025-12-16 13:15:15.562807106 +0000 UTC m=+44.325040064" observedRunningTime="2025-12-16 13:15:16.149387449 +0000 UTC m=+44.911620450" watchObservedRunningTime="2025-12-16 13:15:16.775631493 +0000 UTC m=+45.537864476" Dec 16 13:15:16.836591 kubelet[3348]: I1216 13:15:16.835660 3348 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjg48\" (UniqueName: \"kubernetes.io/projected/eb021275-6ff0-4073-a285-a44761b754c0-kube-api-access-pjg48\") pod \"eb021275-6ff0-4073-a285-a44761b754c0\" (UID: \"eb021275-6ff0-4073-a285-a44761b754c0\") " Dec 16 13:15:16.837038 kubelet[3348]: I1216 13:15:16.836913 3348 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb021275-6ff0-4073-a285-a44761b754c0-whisker-ca-bundle\") pod \"eb021275-6ff0-4073-a285-a44761b754c0\" (UID: \"eb021275-6ff0-4073-a285-a44761b754c0\") " Dec 16 13:15:16.837038 kubelet[3348]: I1216 13:15:16.836970 3348 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb021275-6ff0-4073-a285-a44761b754c0-whisker-backend-key-pair\") pod \"eb021275-6ff0-4073-a285-a44761b754c0\" (UID: \"eb021275-6ff0-4073-a285-a44761b754c0\") " Dec 16 13:15:16.860962 kubelet[3348]: I1216 13:15:16.860882 3348 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb021275-6ff0-4073-a285-a44761b754c0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"eb021275-6ff0-4073-a285-a44761b754c0" (UID: "eb021275-6ff0-4073-a285-a44761b754c0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 13:15:16.871089 kubelet[3348]: I1216 13:15:16.870866 3348 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb021275-6ff0-4073-a285-a44761b754c0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eb021275-6ff0-4073-a285-a44761b754c0" (UID: "eb021275-6ff0-4073-a285-a44761b754c0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 13:15:16.873720 systemd[1]: var-lib-kubelet-pods-eb021275\x2d6ff0\x2d4073\x2da285\x2da44761b754c0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 13:15:16.874929 kubelet[3348]: I1216 13:15:16.874867 3348 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb021275-6ff0-4073-a285-a44761b754c0-kube-api-access-pjg48" (OuterVolumeSpecName: "kube-api-access-pjg48") pod "eb021275-6ff0-4073-a285-a44761b754c0" (UID: "eb021275-6ff0-4073-a285-a44761b754c0"). InnerVolumeSpecName "kube-api-access-pjg48". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 13:15:16.880580 systemd[1]: var-lib-kubelet-pods-eb021275\x2d6ff0\x2d4073\x2da285\x2da44761b754c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpjg48.mount: Deactivated successfully. 
Dec 16 13:15:16.938301 kubelet[3348]: I1216 13:15:16.938260 3348 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb021275-6ff0-4073-a285-a44761b754c0-whisker-ca-bundle\") on node \"ip-172-31-28-249\" DevicePath \"\"" Dec 16 13:15:16.938301 kubelet[3348]: I1216 13:15:16.938293 3348 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb021275-6ff0-4073-a285-a44761b754c0-whisker-backend-key-pair\") on node \"ip-172-31-28-249\" DevicePath \"\"" Dec 16 13:15:16.938301 kubelet[3348]: I1216 13:15:16.938304 3348 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pjg48\" (UniqueName: \"kubernetes.io/projected/eb021275-6ff0-4073-a285-a44761b754c0-kube-api-access-pjg48\") on node \"ip-172-31-28-249\" DevicePath \"\"" Dec 16 13:15:17.110399 systemd[1]: Removed slice kubepods-besteffort-podeb021275_6ff0_4073_a285_a44761b754c0.slice - libcontainer container kubepods-besteffort-podeb021275_6ff0_4073_a285_a44761b754c0.slice. Dec 16 13:15:17.226766 systemd[1]: Created slice kubepods-besteffort-pod63986b45_f828_491f_8283_58bdcda10705.slice - libcontainer container kubepods-besteffort-pod63986b45_f828_491f_8283_58bdcda10705.slice. 
Dec 16 13:15:17.340851 kubelet[3348]: I1216 13:15:17.340785 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nglf\" (UniqueName: \"kubernetes.io/projected/63986b45-f828-491f-8283-58bdcda10705-kube-api-access-7nglf\") pod \"whisker-fc58dbc98-4xrqt\" (UID: \"63986b45-f828-491f-8283-58bdcda10705\") " pod="calico-system/whisker-fc58dbc98-4xrqt" Dec 16 13:15:17.340851 kubelet[3348]: I1216 13:15:17.340840 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63986b45-f828-491f-8283-58bdcda10705-whisker-ca-bundle\") pod \"whisker-fc58dbc98-4xrqt\" (UID: \"63986b45-f828-491f-8283-58bdcda10705\") " pod="calico-system/whisker-fc58dbc98-4xrqt" Dec 16 13:15:17.340851 kubelet[3348]: I1216 13:15:17.340858 3348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63986b45-f828-491f-8283-58bdcda10705-whisker-backend-key-pair\") pod \"whisker-fc58dbc98-4xrqt\" (UID: \"63986b45-f828-491f-8283-58bdcda10705\") " pod="calico-system/whisker-fc58dbc98-4xrqt" Dec 16 13:15:17.485269 kubelet[3348]: I1216 13:15:17.484897 3348 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb021275-6ff0-4073-a285-a44761b754c0" path="/var/lib/kubelet/pods/eb021275-6ff0-4073-a285-a44761b754c0/volumes" Dec 16 13:15:17.536857 containerd[1981]: time="2025-12-16T13:15:17.536810216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc58dbc98-4xrqt,Uid:63986b45-f828-491f-8283-58bdcda10705,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:18.072886 systemd-networkd[1850]: cali1d98b5924fb: Link UP Dec 16 13:15:18.073616 systemd-networkd[1850]: cali1d98b5924fb: Gained carrier Dec 16 13:15:18.074313 (udev-worker)[4613]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 13:15:18.104470 containerd[1981]: 2025-12-16 13:15:17.580 [INFO][4666] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:15:18.104470 containerd[1981]: 2025-12-16 13:15:17.640 [INFO][4666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0 whisker-fc58dbc98- calico-system 63986b45-f828-491f-8283-58bdcda10705 895 0 2025-12-16 13:15:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fc58dbc98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-249 whisker-fc58dbc98-4xrqt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1d98b5924fb [] [] }} ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-" Dec 16 13:15:18.104470 containerd[1981]: 2025-12-16 13:15:17.641 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" Dec 16 13:15:18.104470 containerd[1981]: 2025-12-16 13:15:17.967 [INFO][4677] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" HandleID="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Workload="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:17.971 [INFO][4677] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" 
HandleID="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Workload="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-249", "pod":"whisker-fc58dbc98-4xrqt", "timestamp":"2025-12-16 13:15:17.967856048 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:17.971 [INFO][4677] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:17.972 [INFO][4677] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:17.972 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:18.001 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" host="ip-172-31-28-249" Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:18.018 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:18.024 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:18.027 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:18.104738 containerd[1981]: 2025-12-16 13:15:18.032 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 
13:15:18.104970 containerd[1981]: 2025-12-16 13:15:18.032 [INFO][4677] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" host="ip-172-31-28-249" Dec 16 13:15:18.104970 containerd[1981]: 2025-12-16 13:15:18.033 [INFO][4677] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913 Dec 16 13:15:18.104970 containerd[1981]: 2025-12-16 13:15:18.038 [INFO][4677] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" host="ip-172-31-28-249" Dec 16 13:15:18.104970 containerd[1981]: 2025-12-16 13:15:18.049 [INFO][4677] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.65/26] block=192.168.92.64/26 handle="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" host="ip-172-31-28-249" Dec 16 13:15:18.104970 containerd[1981]: 2025-12-16 13:15:18.050 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.65/26] handle="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" host="ip-172-31-28-249" Dec 16 13:15:18.104970 containerd[1981]: 2025-12-16 13:15:18.050 [INFO][4677] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:15:18.104970 containerd[1981]: 2025-12-16 13:15:18.050 [INFO][4677] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.65/26] IPv6=[] ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" HandleID="k8s-pod-network.9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Workload="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" Dec 16 13:15:18.106874 containerd[1981]: 2025-12-16 13:15:18.053 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0", GenerateName:"whisker-fc58dbc98-", Namespace:"calico-system", SelfLink:"", UID:"63986b45-f828-491f-8283-58bdcda10705", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fc58dbc98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"whisker-fc58dbc98-4xrqt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali1d98b5924fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:18.106874 containerd[1981]: 2025-12-16 13:15:18.054 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.65/32] ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" Dec 16 13:15:18.106976 containerd[1981]: 2025-12-16 13:15:18.054 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d98b5924fb ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" Dec 16 13:15:18.106976 containerd[1981]: 2025-12-16 13:15:18.075 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" Dec 16 13:15:18.107032 containerd[1981]: 2025-12-16 13:15:18.076 [INFO][4666] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0", GenerateName:"whisker-fc58dbc98-", Namespace:"calico-system", SelfLink:"", UID:"63986b45-f828-491f-8283-58bdcda10705", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, 
time.December, 16, 13, 15, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fc58dbc98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913", Pod:"whisker-fc58dbc98-4xrqt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.92.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1d98b5924fb", MAC:"7a:0b:e1:9c:4f:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:18.107088 containerd[1981]: 2025-12-16 13:15:18.097 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" Namespace="calico-system" Pod="whisker-fc58dbc98-4xrqt" WorkloadEndpoint="ip--172--31--28--249-k8s-whisker--fc58dbc98--4xrqt-eth0" Dec 16 13:15:18.241591 containerd[1981]: time="2025-12-16T13:15:18.240675912Z" level=info msg="connecting to shim 9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913" address="unix:///run/containerd/s/12b857514b7723bdfc34a2f6d2b9f781c4c8c581a33d591e8f6a2bb76d5ba350" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:18.300982 systemd[1]: Started cri-containerd-9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913.scope - libcontainer container 9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913. 
Dec 16 13:15:18.493586 containerd[1981]: time="2025-12-16T13:15:18.492642293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf79d7c7c-wlbjc,Uid:283d7557-65a8-4b3b-9bfa-2489f569eafb,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:18.501021 containerd[1981]: time="2025-12-16T13:15:18.500976646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc58dbc98-4xrqt,Uid:63986b45-f828-491f-8283-58bdcda10705,Namespace:calico-system,Attempt:0,} returns sandbox id \"9de1fbddeb4e51c21e87b608ac4eae4dcabbdd89872d9f86f3b47d7288650913\"" Dec 16 13:15:18.507552 containerd[1981]: time="2025-12-16T13:15:18.505878689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:15:18.832500 containerd[1981]: time="2025-12-16T13:15:18.832372406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:18.837395 containerd[1981]: time="2025-12-16T13:15:18.836004600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:15:18.851862 containerd[1981]: time="2025-12-16T13:15:18.851708207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:15:18.852636 kubelet[3348]: E1216 13:15:18.852586 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:18.854193 kubelet[3348]: E1216 13:15:18.853615 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:18.854193 kubelet[3348]: E1216 13:15:18.853763 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:18.857807 containerd[1981]: time="2025-12-16T13:15:18.857762390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:15:18.891312 (udev-worker)[4612]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 13:15:18.895368 systemd-networkd[1850]: calid2c4a04bff6: Link UP Dec 16 13:15:18.896110 systemd-networkd[1850]: calid2c4a04bff6: Gained carrier Dec 16 13:15:18.943551 containerd[1981]: 2025-12-16 13:15:18.637 [INFO][4826] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 13:15:18.943551 containerd[1981]: 2025-12-16 13:15:18.661 [INFO][4826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0 calico-kube-controllers-6cf79d7c7c- calico-system 283d7557-65a8-4b3b-9bfa-2489f569eafb 820 0 2025-12-16 13:14:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cf79d7c7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-249 calico-kube-controllers-6cf79d7c7c-wlbjc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid2c4a04bff6 [] [] }} ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-" Dec 16 13:15:18.943551 containerd[1981]: 2025-12-16 13:15:18.662 [INFO][4826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" Dec 16 13:15:18.943551 containerd[1981]: 2025-12-16 13:15:18.752 [INFO][4848] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" 
HandleID="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Workload="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.752 [INFO][4848] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" HandleID="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Workload="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-249", "pod":"calico-kube-controllers-6cf79d7c7c-wlbjc", "timestamp":"2025-12-16 13:15:18.752001192 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.752 [INFO][4848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.752 [INFO][4848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.753 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.776 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" host="ip-172-31-28-249" Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.795 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.807 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.816 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:18.943905 containerd[1981]: 2025-12-16 13:15:18.825 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:18.947605 containerd[1981]: 2025-12-16 13:15:18.825 [INFO][4848] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" host="ip-172-31-28-249" Dec 16 13:15:18.947605 containerd[1981]: 2025-12-16 13:15:18.828 [INFO][4848] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7 Dec 16 13:15:18.947605 containerd[1981]: 2025-12-16 13:15:18.839 [INFO][4848] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" host="ip-172-31-28-249" Dec 16 13:15:18.947605 containerd[1981]: 2025-12-16 13:15:18.876 [INFO][4848] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.66/26] block=192.168.92.64/26 
handle="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" host="ip-172-31-28-249" Dec 16 13:15:18.947605 containerd[1981]: 2025-12-16 13:15:18.877 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.66/26] handle="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" host="ip-172-31-28-249" Dec 16 13:15:18.947605 containerd[1981]: 2025-12-16 13:15:18.877 [INFO][4848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:15:18.947605 containerd[1981]: 2025-12-16 13:15:18.877 [INFO][4848] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.66/26] IPv6=[] ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" HandleID="k8s-pod-network.594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Workload="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" Dec 16 13:15:18.947918 containerd[1981]: 2025-12-16 13:15:18.886 [INFO][4826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0", GenerateName:"calico-kube-controllers-6cf79d7c7c-", Namespace:"calico-system", SelfLink:"", UID:"283d7557-65a8-4b3b-9bfa-2489f569eafb", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf79d7c7c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"calico-kube-controllers-6cf79d7c7c-wlbjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2c4a04bff6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:18.948732 containerd[1981]: 2025-12-16 13:15:18.887 [INFO][4826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.66/32] ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" Dec 16 13:15:18.948732 containerd[1981]: 2025-12-16 13:15:18.887 [INFO][4826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2c4a04bff6 ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" Dec 16 13:15:18.948732 containerd[1981]: 2025-12-16 13:15:18.899 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" 
WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" Dec 16 13:15:18.948871 containerd[1981]: 2025-12-16 13:15:18.900 [INFO][4826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0", GenerateName:"calico-kube-controllers-6cf79d7c7c-", Namespace:"calico-system", SelfLink:"", UID:"283d7557-65a8-4b3b-9bfa-2489f569eafb", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cf79d7c7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7", Pod:"calico-kube-controllers-6cf79d7c7c-wlbjc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid2c4a04bff6", 
MAC:"96:a3:0f:d1:56:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:18.948968 containerd[1981]: 2025-12-16 13:15:18.931 [INFO][4826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" Namespace="calico-system" Pod="calico-kube-controllers-6cf79d7c7c-wlbjc" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--kube--controllers--6cf79d7c7c--wlbjc-eth0" Dec 16 13:15:19.002051 containerd[1981]: time="2025-12-16T13:15:19.001992975Z" level=info msg="connecting to shim 594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7" address="unix:///run/containerd/s/1e5e895cc13fc86f72f03d5dec9eb2f53316a94d10e993bf5ee8fbe44e4df417" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:19.053111 systemd[1]: Started cri-containerd-594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7.scope - libcontainer container 594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7. 
Dec 16 13:15:19.162168 containerd[1981]: time="2025-12-16T13:15:19.162008349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:19.164237 containerd[1981]: time="2025-12-16T13:15:19.164094021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:15:19.164612 containerd[1981]: time="2025-12-16T13:15:19.164130812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:15:19.164930 kubelet[3348]: E1216 13:15:19.164890 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:15:19.165264 kubelet[3348]: E1216 13:15:19.165107 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:15:19.165548 kubelet[3348]: E1216 13:15:19.165473 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:19.166332 kubelet[3348]: E1216 13:15:19.166290 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:15:19.193350 containerd[1981]: time="2025-12-16T13:15:19.193219524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cf79d7c7c-wlbjc,Uid:283d7557-65a8-4b3b-9bfa-2489f569eafb,Namespace:calico-system,Attempt:0,} returns sandbox id \"594145382e9763c78d0e2a06562770bdabb6637f472411d8af249bff76aacbe7\"" Dec 16 13:15:19.197748 containerd[1981]: time="2025-12-16T13:15:19.197611869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:15:19.456315 containerd[1981]: time="2025-12-16T13:15:19.456034625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:19.458550 containerd[1981]: time="2025-12-16T13:15:19.458488512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:15:19.459506 containerd[1981]: time="2025-12-16T13:15:19.458703377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:15:19.461595 kubelet[3348]: E1216 13:15:19.459790 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:15:19.461595 kubelet[3348]: E1216 13:15:19.459847 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:15:19.461595 kubelet[3348]: E1216 13:15:19.459942 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf79d7c7c-wlbjc_calico-system(283d7557-65a8-4b3b-9bfa-2489f569eafb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:19.461595 kubelet[3348]: E1216 13:15:19.459982 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:15:19.503551 systemd-networkd[1850]: cali1d98b5924fb: Gained IPv6LL Dec 16 13:15:19.612660 systemd-networkd[1850]: vxlan.calico: Link UP Dec 16 13:15:19.612672 systemd-networkd[1850]: vxlan.calico: Gained carrier Dec 16 13:15:20.116398 kubelet[3348]: E1216 13:15:20.116322 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:15:20.118197 kubelet[3348]: E1216 13:15:20.117968 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:15:20.265987 systemd-networkd[1850]: calid2c4a04bff6: Gained IPv6LL Dec 16 13:15:20.486667 containerd[1981]: time="2025-12-16T13:15:20.486350283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-x8rs5,Uid:d1b7644f-3acf-411e-a5e8-2f3cc85e178b,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:15:20.490104 containerd[1981]: time="2025-12-16T13:15:20.489669338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtpks,Uid:849d73a2-70ae-4c16-a2df-5353f11e5191,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:20.690788 systemd-networkd[1850]: cali5f703d26dc5: Link UP Dec 16 13:15:20.691816 systemd-networkd[1850]: cali5f703d26dc5: Gained carrier Dec 16 13:15:20.708886 containerd[1981]: 2025-12-16 13:15:20.577 [INFO][5005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0 calico-apiserver-5dbb4c8d86- calico-apiserver d1b7644f-3acf-411e-a5e8-2f3cc85e178b 821 0 2025-12-16 13:14:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dbb4c8d86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-249 calico-apiserver-5dbb4c8d86-x8rs5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f703d26dc5 [] [] }} 
ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-" Dec 16 13:15:20.708886 containerd[1981]: 2025-12-16 13:15:20.578 [INFO][5005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" Dec 16 13:15:20.708886 containerd[1981]: 2025-12-16 13:15:20.622 [INFO][5027] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" HandleID="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Workload="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.623 [INFO][5027] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" HandleID="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Workload="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf2d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-249", "pod":"calico-apiserver-5dbb4c8d86-x8rs5", "timestamp":"2025-12-16 13:15:20.622973004 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.623 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.623 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.623 [INFO][5027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.636 [INFO][5027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" host="ip-172-31-28-249" Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.643 [INFO][5027] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.655 [INFO][5027] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.662 [INFO][5027] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:20.709101 containerd[1981]: 2025-12-16 13:15:20.664 [INFO][5027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:20.709323 containerd[1981]: 2025-12-16 13:15:20.665 [INFO][5027] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" host="ip-172-31-28-249" Dec 16 13:15:20.709323 containerd[1981]: 2025-12-16 13:15:20.668 [INFO][5027] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36 Dec 16 13:15:20.709323 containerd[1981]: 2025-12-16 13:15:20.675 [INFO][5027] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" host="ip-172-31-28-249" Dec 16 13:15:20.709323 
containerd[1981]: 2025-12-16 13:15:20.683 [INFO][5027] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.67/26] block=192.168.92.64/26 handle="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" host="ip-172-31-28-249" Dec 16 13:15:20.709323 containerd[1981]: 2025-12-16 13:15:20.683 [INFO][5027] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.67/26] handle="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" host="ip-172-31-28-249" Dec 16 13:15:20.709323 containerd[1981]: 2025-12-16 13:15:20.683 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:15:20.709323 containerd[1981]: 2025-12-16 13:15:20.683 [INFO][5027] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.67/26] IPv6=[] ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" HandleID="k8s-pod-network.a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Workload="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" Dec 16 13:15:20.709484 containerd[1981]: 2025-12-16 13:15:20.686 [INFO][5005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0", GenerateName:"calico-apiserver-5dbb4c8d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1b7644f-3acf-411e-a5e8-2f3cc85e178b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb4c8d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"calico-apiserver-5dbb4c8d86-x8rs5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f703d26dc5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:20.709541 containerd[1981]: 2025-12-16 13:15:20.686 [INFO][5005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.67/32] ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" Dec 16 13:15:20.709541 containerd[1981]: 2025-12-16 13:15:20.686 [INFO][5005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f703d26dc5 ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" Dec 16 13:15:20.709541 containerd[1981]: 2025-12-16 13:15:20.690 [INFO][5005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" 
Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" Dec 16 13:15:20.709634 containerd[1981]: 2025-12-16 13:15:20.690 [INFO][5005] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0", GenerateName:"calico-apiserver-5dbb4c8d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"d1b7644f-3acf-411e-a5e8-2f3cc85e178b", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb4c8d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36", Pod:"calico-apiserver-5dbb4c8d86-x8rs5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f703d26dc5", MAC:"92:2c:8d:6c:9f:c4", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:20.712028 containerd[1981]: 2025-12-16 13:15:20.704 [INFO][5005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-x8rs5" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--x8rs5-eth0" Dec 16 13:15:20.779839 containerd[1981]: time="2025-12-16T13:15:20.778653857Z" level=info msg="connecting to shim a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36" address="unix:///run/containerd/s/ec1bf674a9dd0982f6e1b8327fce38e93e81bf3a25225483d5f28f738abc700e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:20.841819 systemd[1]: Started cri-containerd-a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36.scope - libcontainer container a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36. 
Dec 16 13:15:20.854875 systemd-networkd[1850]: califba71c70843: Link UP Dec 16 13:15:20.859905 systemd-networkd[1850]: califba71c70843: Gained carrier Dec 16 13:15:20.898435 containerd[1981]: 2025-12-16 13:15:20.580 [INFO][5010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0 goldmane-7c778bb748- calico-system 849d73a2-70ae-4c16-a2df-5353f11e5191 824 0 2025-12-16 13:14:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-249 goldmane-7c778bb748-mtpks eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califba71c70843 [] [] }} ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-" Dec 16 13:15:20.898435 containerd[1981]: 2025-12-16 13:15:20.580 [INFO][5010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" Dec 16 13:15:20.898435 containerd[1981]: 2025-12-16 13:15:20.644 [INFO][5029] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" HandleID="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Workload="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.645 [INFO][5029] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" HandleID="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Workload="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-249", "pod":"goldmane-7c778bb748-mtpks", "timestamp":"2025-12-16 13:15:20.644488831 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.646 [INFO][5029] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.684 [INFO][5029] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.684 [INFO][5029] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.735 [INFO][5029] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" host="ip-172-31-28-249" Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.744 [INFO][5029] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.757 [INFO][5029] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.767 [INFO][5029] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:20.898759 containerd[1981]: 2025-12-16 13:15:20.772 [INFO][5029] ipam/ipam.go 235: Affinity is 
confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:20.899137 containerd[1981]: 2025-12-16 13:15:20.775 [INFO][5029] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" host="ip-172-31-28-249" Dec 16 13:15:20.899137 containerd[1981]: 2025-12-16 13:15:20.782 [INFO][5029] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90 Dec 16 13:15:20.899137 containerd[1981]: 2025-12-16 13:15:20.798 [INFO][5029] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" host="ip-172-31-28-249" Dec 16 13:15:20.899137 containerd[1981]: 2025-12-16 13:15:20.815 [INFO][5029] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.68/26] block=192.168.92.64/26 handle="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" host="ip-172-31-28-249" Dec 16 13:15:20.899137 containerd[1981]: 2025-12-16 13:15:20.815 [INFO][5029] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.68/26] handle="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" host="ip-172-31-28-249" Dec 16 13:15:20.899137 containerd[1981]: 2025-12-16 13:15:20.816 [INFO][5029] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:15:20.899137 containerd[1981]: 2025-12-16 13:15:20.816 [INFO][5029] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.68/26] IPv6=[] ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" HandleID="k8s-pod-network.8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Workload="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" Dec 16 13:15:20.899407 containerd[1981]: 2025-12-16 13:15:20.824 [INFO][5010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"849d73a2-70ae-4c16-a2df-5353f11e5191", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"goldmane-7c778bb748-mtpks", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"califba71c70843", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:20.899407 containerd[1981]: 2025-12-16 13:15:20.825 [INFO][5010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.68/32] ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" Dec 16 13:15:20.899571 containerd[1981]: 2025-12-16 13:15:20.826 [INFO][5010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califba71c70843 ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" Dec 16 13:15:20.899571 containerd[1981]: 2025-12-16 13:15:20.865 [INFO][5010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" Dec 16 13:15:20.899659 containerd[1981]: 2025-12-16 13:15:20.871 [INFO][5010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"849d73a2-70ae-4c16-a2df-5353f11e5191", ResourceVersion:"824", 
Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90", Pod:"goldmane-7c778bb748-mtpks", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.92.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califba71c70843", MAC:"76:ab:98:24:3f:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:20.899762 containerd[1981]: 2025-12-16 13:15:20.892 [INFO][5010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" Namespace="calico-system" Pod="goldmane-7c778bb748-mtpks" WorkloadEndpoint="ip--172--31--28--249-k8s-goldmane--7c778bb748--mtpks-eth0" Dec 16 13:15:20.956847 containerd[1981]: time="2025-12-16T13:15:20.956793544Z" level=info msg="connecting to shim 8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90" address="unix:///run/containerd/s/57ba743aa9f0b6117b33d4df3e21267790cc6a841f34dea376bf3958b9afdfe6" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:20.977315 containerd[1981]: time="2025-12-16T13:15:20.977215792Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-x8rs5,Uid:d1b7644f-3acf-411e-a5e8-2f3cc85e178b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a842d78a15242a24dbb0524275cf0d3eb91e97eb04c58018f1d35bc66d8c3d36\"" Dec 16 13:15:20.981019 containerd[1981]: time="2025-12-16T13:15:20.980951472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:21.005871 systemd[1]: Started cri-containerd-8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90.scope - libcontainer container 8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90. Dec 16 13:15:21.033802 systemd-networkd[1850]: vxlan.calico: Gained IPv6LL Dec 16 13:15:21.092688 containerd[1981]: time="2025-12-16T13:15:21.092626886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mtpks,Uid:849d73a2-70ae-4c16-a2df-5353f11e5191,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f9ef187afc1447d5910040779d6ff30216fb3d12d0e25425c98cb0afce68d90\"" Dec 16 13:15:21.125840 kubelet[3348]: E1216 13:15:21.125780 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:15:21.251468 containerd[1981]: time="2025-12-16T13:15:21.251334775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:21.254216 containerd[1981]: time="2025-12-16T13:15:21.254054703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:21.254382 containerd[1981]: time="2025-12-16T13:15:21.254116473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:21.255097 kubelet[3348]: E1216 13:15:21.255051 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:21.255288 kubelet[3348]: E1216 13:15:21.255244 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:21.255636 kubelet[3348]: E1216 13:15:21.255613 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dbb4c8d86-x8rs5_calico-apiserver(d1b7644f-3acf-411e-a5e8-2f3cc85e178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:21.255944 containerd[1981]: time="2025-12-16T13:15:21.255920505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:15:21.256884 kubelet[3348]: E1216 13:15:21.256805 3348 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:15:21.485075 containerd[1981]: time="2025-12-16T13:15:21.484865389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bxdj,Uid:1f45875d-2734-4fdc-b236-7b99e52c65eb,Namespace:kube-system,Attempt:0,}" Dec 16 13:15:21.490878 containerd[1981]: time="2025-12-16T13:15:21.490803361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mpggz,Uid:5e30039f-976c-4f39-a91e-eac0996660a4,Namespace:kube-system,Attempt:0,}" Dec 16 13:15:21.500129 containerd[1981]: time="2025-12-16T13:15:21.499524128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cncts,Uid:bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6,Namespace:calico-system,Attempt:0,}" Dec 16 13:15:21.510606 containerd[1981]: time="2025-12-16T13:15:21.505103346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-dk448,Uid:97ebb483-74aa-4963-b528-353f8ea2fd10,Namespace:calico-apiserver,Attempt:0,}" Dec 16 13:15:21.552751 containerd[1981]: time="2025-12-16T13:15:21.552694462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:21.559588 containerd[1981]: time="2025-12-16T13:15:21.557618673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:21.561960 containerd[1981]: time="2025-12-16T13:15:21.560781599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:15:21.565577 kubelet[3348]: E1216 13:15:21.565477 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:15:21.565577 kubelet[3348]: E1216 13:15:21.565527 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:15:21.566462 kubelet[3348]: E1216 13:15:21.566234 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mtpks_calico-system(849d73a2-70ae-4c16-a2df-5353f11e5191): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:21.567270 kubelet[3348]: E1216 13:15:21.567157 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" 
podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:15:21.857855 systemd-networkd[1850]: calibebc517ac9b: Link UP Dec 16 13:15:21.860379 systemd-networkd[1850]: calibebc517ac9b: Gained carrier Dec 16 13:15:21.884308 containerd[1981]: 2025-12-16 13:15:21.673 [INFO][5174] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0 calico-apiserver-5dbb4c8d86- calico-apiserver 97ebb483-74aa-4963-b528-353f8ea2fd10 823 0 2025-12-16 13:14:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dbb4c8d86 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-249 calico-apiserver-5dbb4c8d86-dk448 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibebc517ac9b [] [] }} ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-" Dec 16 13:15:21.884308 containerd[1981]: 2025-12-16 13:15:21.673 [INFO][5174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" Dec 16 13:15:21.884308 containerd[1981]: 2025-12-16 13:15:21.778 [INFO][5207] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" HandleID="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Workload="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" Dec 16 
13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.779 [INFO][5207] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" HandleID="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Workload="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003413a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-249", "pod":"calico-apiserver-5dbb4c8d86-dk448", "timestamp":"2025-12-16 13:15:21.778872699 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.779 [INFO][5207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.779 [INFO][5207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.779 [INFO][5207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.798 [INFO][5207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" host="ip-172-31-28-249" Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.808 [INFO][5207] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.817 [INFO][5207] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.821 [INFO][5207] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:21.884779 containerd[1981]: 2025-12-16 13:15:21.825 [INFO][5207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:21.885058 containerd[1981]: 2025-12-16 13:15:21.825 [INFO][5207] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" host="ip-172-31-28-249" Dec 16 13:15:21.885058 containerd[1981]: 2025-12-16 13:15:21.827 [INFO][5207] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037 Dec 16 13:15:21.885058 containerd[1981]: 2025-12-16 13:15:21.832 [INFO][5207] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" host="ip-172-31-28-249" Dec 16 13:15:21.885058 containerd[1981]: 2025-12-16 13:15:21.841 [INFO][5207] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.69/26] block=192.168.92.64/26 
handle="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" host="ip-172-31-28-249" Dec 16 13:15:21.885058 containerd[1981]: 2025-12-16 13:15:21.841 [INFO][5207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.69/26] handle="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" host="ip-172-31-28-249" Dec 16 13:15:21.885058 containerd[1981]: 2025-12-16 13:15:21.841 [INFO][5207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:15:21.885058 containerd[1981]: 2025-12-16 13:15:21.841 [INFO][5207] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.69/26] IPv6=[] ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" HandleID="k8s-pod-network.0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Workload="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" Dec 16 13:15:21.885227 containerd[1981]: 2025-12-16 13:15:21.847 [INFO][5174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0", GenerateName:"calico-apiserver-5dbb4c8d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"97ebb483-74aa-4963-b528-353f8ea2fd10", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb4c8d86", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"calico-apiserver-5dbb4c8d86-dk448", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibebc517ac9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:21.885290 containerd[1981]: 2025-12-16 13:15:21.847 [INFO][5174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.69/32] ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" Dec 16 13:15:21.885290 containerd[1981]: 2025-12-16 13:15:21.847 [INFO][5174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibebc517ac9b ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" Dec 16 13:15:21.885290 containerd[1981]: 2025-12-16 13:15:21.862 [INFO][5174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" Dec 16 
13:15:21.885366 containerd[1981]: 2025-12-16 13:15:21.863 [INFO][5174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0", GenerateName:"calico-apiserver-5dbb4c8d86-", Namespace:"calico-apiserver", SelfLink:"", UID:"97ebb483-74aa-4963-b528-353f8ea2fd10", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dbb4c8d86", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037", Pod:"calico-apiserver-5dbb4c8d86-dk448", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibebc517ac9b", MAC:"06:52:26:9e:8b:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 
13:15:21.886093 containerd[1981]: 2025-12-16 13:15:21.881 [INFO][5174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" Namespace="calico-apiserver" Pod="calico-apiserver-5dbb4c8d86-dk448" WorkloadEndpoint="ip--172--31--28--249-k8s-calico--apiserver--5dbb4c8d86--dk448-eth0" Dec 16 13:15:21.926797 containerd[1981]: time="2025-12-16T13:15:21.926725341Z" level=info msg="connecting to shim 0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037" address="unix:///run/containerd/s/2462491e0aebae3be65ba49cd49b4468ff0fa03bb188c80f6fb659f1bc7ff7a5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:21.966034 systemd-networkd[1850]: califf4abec96b1: Link UP Dec 16 13:15:21.967154 systemd-networkd[1850]: califf4abec96b1: Gained carrier Dec 16 13:15:21.981872 systemd[1]: Started cri-containerd-0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037.scope - libcontainer container 0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037. 
Dec 16 13:15:21.993999 systemd-networkd[1850]: cali5f703d26dc5: Gained IPv6LL Dec 16 13:15:21.996971 containerd[1981]: 2025-12-16 13:15:21.707 [INFO][5166] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0 coredns-66bc5c9577- kube-system 1f45875d-2734-4fdc-b236-7b99e52c65eb 822 0 2025-12-16 13:14:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-249 coredns-66bc5c9577-4bxdj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf4abec96b1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-" Dec 16 13:15:21.996971 containerd[1981]: 2025-12-16 13:15:21.707 [INFO][5166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" Dec 16 13:15:21.996971 containerd[1981]: 2025-12-16 13:15:21.805 [INFO][5214] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" HandleID="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Workload="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.805 [INFO][5214] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" 
HandleID="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Workload="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7de0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-249", "pod":"coredns-66bc5c9577-4bxdj", "timestamp":"2025-12-16 13:15:21.805256214 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.805 [INFO][5214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.842 [INFO][5214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.842 [INFO][5214] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.899 [INFO][5214] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" host="ip-172-31-28-249" Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.907 [INFO][5214] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.915 [INFO][5214] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.920 [INFO][5214] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:21.997186 containerd[1981]: 2025-12-16 13:15:21.925 [INFO][5214] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 
13:15:21.997412 containerd[1981]: 2025-12-16 13:15:21.925 [INFO][5214] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" host="ip-172-31-28-249" Dec 16 13:15:21.997412 containerd[1981]: 2025-12-16 13:15:21.928 [INFO][5214] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882 Dec 16 13:15:21.997412 containerd[1981]: 2025-12-16 13:15:21.935 [INFO][5214] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" host="ip-172-31-28-249" Dec 16 13:15:21.997412 containerd[1981]: 2025-12-16 13:15:21.949 [INFO][5214] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.70/26] block=192.168.92.64/26 handle="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" host="ip-172-31-28-249" Dec 16 13:15:21.997412 containerd[1981]: 2025-12-16 13:15:21.949 [INFO][5214] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.70/26] handle="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" host="ip-172-31-28-249" Dec 16 13:15:21.997412 containerd[1981]: 2025-12-16 13:15:21.949 [INFO][5214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:15:21.997412 containerd[1981]: 2025-12-16 13:15:21.949 [INFO][5214] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.70/26] IPv6=[] ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" HandleID="k8s-pod-network.3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Workload="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" Dec 16 13:15:21.999026 containerd[1981]: 2025-12-16 13:15:21.954 [INFO][5166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1f45875d-2734-4fdc-b236-7b99e52c65eb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"coredns-66bc5c9577-4bxdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf4abec96b1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:21.999026 containerd[1981]: 2025-12-16 13:15:21.954 [INFO][5166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.70/32] ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" Dec 16 13:15:21.999026 containerd[1981]: 2025-12-16 13:15:21.955 [INFO][5166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf4abec96b1 ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" Dec 16 13:15:21.999026 containerd[1981]: 2025-12-16 13:15:21.969 [INFO][5166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" Dec 16 13:15:21.999026 containerd[1981]: 2025-12-16 13:15:21.971 [INFO][5166] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1f45875d-2734-4fdc-b236-7b99e52c65eb", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882", Pod:"coredns-66bc5c9577-4bxdj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf4abec96b1", MAC:"e6:2d:6d:e2:73:cb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:21.999026 containerd[1981]: 2025-12-16 13:15:21.993 [INFO][5166] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" Namespace="kube-system" Pod="coredns-66bc5c9577-4bxdj" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--4bxdj-eth0" Dec 16 13:15:22.063995 containerd[1981]: time="2025-12-16T13:15:22.063946761Z" level=info msg="connecting to shim 3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882" address="unix:///run/containerd/s/0ffefd6d6b0f8dffc2ab432b2e8275cdd683781cc269833005060365eb94b442" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:22.093988 systemd-networkd[1850]: cali7e4d7d2e306: Link UP Dec 16 13:15:22.096744 systemd-networkd[1850]: cali7e4d7d2e306: Gained carrier Dec 16 13:15:22.120934 systemd[1]: Started cri-containerd-3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882.scope - libcontainer container 3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882. 
Dec 16 13:15:22.146274 kubelet[3348]: E1216 13:15:22.146225 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:15:22.153088 kubelet[3348]: E1216 13:15:22.153039 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:21.711 [INFO][5164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0 csi-node-driver- calico-system bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6 703 0 2025-12-16 13:14:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-249 csi-node-driver-cncts eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali7e4d7d2e306 [] [] }} ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:21.711 [INFO][5164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:21.823 [INFO][5216] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" HandleID="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Workload="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:21.824 [INFO][5216] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" HandleID="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Workload="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d94a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-249", "pod":"csi-node-driver-cncts", "timestamp":"2025-12-16 13:15:21.823834319 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:21.824 [INFO][5216] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:21.950 [INFO][5216] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:21.950 [INFO][5216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.002 [INFO][5216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.011 [INFO][5216] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.028 [INFO][5216] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.032 [INFO][5216] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.039 [INFO][5216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.040 [INFO][5216] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.049 [INFO][5216] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32 Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.060 [INFO][5216] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" host="ip-172-31-28-249" Dec 16 13:15:22.161167 
containerd[1981]: 2025-12-16 13:15:22.078 [INFO][5216] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.71/26] block=192.168.92.64/26 handle="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.079 [INFO][5216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.71/26] handle="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" host="ip-172-31-28-249" Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.079 [INFO][5216] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 13:15:22.161167 containerd[1981]: 2025-12-16 13:15:22.079 [INFO][5216] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.71/26] IPv6=[] ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" HandleID="k8s-pod-network.33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Workload="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" Dec 16 13:15:22.162940 containerd[1981]: 2025-12-16 13:15:22.086 [INFO][5164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"csi-node-driver-cncts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e4d7d2e306", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:22.162940 containerd[1981]: 2025-12-16 13:15:22.086 [INFO][5164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.71/32] ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" Dec 16 13:15:22.162940 containerd[1981]: 2025-12-16 13:15:22.087 [INFO][5164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e4d7d2e306 ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" Dec 16 13:15:22.162940 containerd[1981]: 2025-12-16 13:15:22.099 [INFO][5164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" Dec 16 13:15:22.162940 
containerd[1981]: 2025-12-16 13:15:22.105 [INFO][5164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32", Pod:"csi-node-driver-cncts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e4d7d2e306", MAC:"3a:1e:0a:38:19:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:22.162940 containerd[1981]: 2025-12-16 13:15:22.139 
[INFO][5164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" Namespace="calico-system" Pod="csi-node-driver-cncts" WorkloadEndpoint="ip--172--31--28--249-k8s-csi--node--driver--cncts-eth0" Dec 16 13:15:22.186736 systemd-networkd[1850]: califba71c70843: Gained IPv6LL Dec 16 13:15:22.231364 containerd[1981]: time="2025-12-16T13:15:22.231314628Z" level=info msg="connecting to shim 33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32" address="unix:///run/containerd/s/c2e5c7fb74b40264cef3109febca465b5a5df9facb1ea186468abba84f54872e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:22.268581 systemd-networkd[1850]: cali4170e447b3a: Link UP Dec 16 13:15:22.273743 systemd-networkd[1850]: cali4170e447b3a: Gained carrier Dec 16 13:15:22.296844 systemd[1]: Started cri-containerd-33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32.scope - libcontainer container 33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32. 
Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:21.714 [INFO][5152] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0 coredns-66bc5c9577- kube-system 5e30039f-976c-4f39-a91e-eac0996660a4 819 0 2025-12-16 13:14:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-249 coredns-66bc5c9577-mpggz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4170e447b3a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:21.714 [INFO][5152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:21.830 [INFO][5220] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" HandleID="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Workload="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:21.831 [INFO][5220] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" 
HandleID="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Workload="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-249", "pod":"coredns-66bc5c9577-mpggz", "timestamp":"2025-12-16 13:15:21.830863519 +0000 UTC"}, Hostname:"ip-172-31-28-249", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:21.831 [INFO][5220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.079 [INFO][5220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.080 [INFO][5220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-249' Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.119 [INFO][5220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.141 [INFO][5220] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.159 [INFO][5220] ipam/ipam.go 511: Trying affinity for 192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.167 [INFO][5220] ipam/ipam.go 158: Attempting to load block cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.181 [INFO][5220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.92.64/26 host="ip-172-31-28-249" Dec 16 
13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.181 [INFO][5220] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.92.64/26 handle="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.192 [INFO][5220] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.217 [INFO][5220] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.92.64/26 handle="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.246 [INFO][5220] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.92.72/26] block=192.168.92.64/26 handle="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.246 [INFO][5220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.92.72/26] handle="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" host="ip-172-31-28-249" Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.246 [INFO][5220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 13:15:22.328233 containerd[1981]: 2025-12-16 13:15:22.246 [INFO][5220] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.92.72/26] IPv6=[] ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" HandleID="k8s-pod-network.8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Workload="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" Dec 16 13:15:22.329141 containerd[1981]: 2025-12-16 13:15:22.254 [INFO][5152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5e30039f-976c-4f39-a91e-eac0996660a4", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"", Pod:"coredns-66bc5c9577-mpggz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4170e447b3a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:22.329141 containerd[1981]: 2025-12-16 13:15:22.255 [INFO][5152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.92.72/32] ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" Dec 16 13:15:22.329141 containerd[1981]: 2025-12-16 13:15:22.256 [INFO][5152] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4170e447b3a ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" Dec 16 13:15:22.329141 containerd[1981]: 2025-12-16 13:15:22.272 [INFO][5152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" Dec 16 13:15:22.329141 containerd[1981]: 2025-12-16 13:15:22.278 [INFO][5152] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5e30039f-976c-4f39-a91e-eac0996660a4", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 13, 14, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-249", ContainerID:"8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba", Pod:"coredns-66bc5c9577-mpggz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4170e447b3a", MAC:"96:39:77:9c:c5:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 13:15:22.329141 containerd[1981]: 2025-12-16 13:15:22.320 [INFO][5152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" Namespace="kube-system" Pod="coredns-66bc5c9577-mpggz" WorkloadEndpoint="ip--172--31--28--249-k8s-coredns--66bc5c9577--mpggz-eth0" Dec 16 13:15:22.330832 containerd[1981]: time="2025-12-16T13:15:22.329834079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4bxdj,Uid:1f45875d-2734-4fdc-b236-7b99e52c65eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882\"" Dec 16 13:15:22.408247 containerd[1981]: time="2025-12-16T13:15:22.408091566Z" level=info msg="CreateContainer within sandbox \"3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:15:22.431836 containerd[1981]: time="2025-12-16T13:15:22.431788233Z" level=info msg="connecting to shim 8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba" address="unix:///run/containerd/s/d713b561195c0e8f9a871ad199de4c71e5931261a8153cae6c0d043d30c71558" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:15:22.466857 containerd[1981]: time="2025-12-16T13:15:22.466806619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dbb4c8d86-dk448,Uid:97ebb483-74aa-4963-b528-353f8ea2fd10,Namespace:calico-apiserver,Attempt:0,} returns 
sandbox id \"0e258f794868be84d1c0cfc23a34813a21ee07d854e638e82c79c4a683e51037\"" Dec 16 13:15:22.479146 systemd[1]: Started cri-containerd-8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba.scope - libcontainer container 8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba. Dec 16 13:15:22.482645 containerd[1981]: time="2025-12-16T13:15:22.482546196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:22.492005 containerd[1981]: time="2025-12-16T13:15:22.491976309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cncts,Uid:bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6,Namespace:calico-system,Attempt:0,} returns sandbox id \"33f1c6241b0b6d4d1e75118c69d320c3870b9777c5045538906893926ca72f32\"" Dec 16 13:15:22.496369 containerd[1981]: time="2025-12-16T13:15:22.495804159Z" level=info msg="Container ca01f2cb250bb035e9b1b6fa4e6ea040a6dfce1d9ddc754093da34bd8a37bc07: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:15:22.522709 containerd[1981]: time="2025-12-16T13:15:22.522677555Z" level=info msg="CreateContainer within sandbox \"3f822bbd61c414164b7a34f47c02910821f5f3aba34cf78f0db335c653ec9882\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca01f2cb250bb035e9b1b6fa4e6ea040a6dfce1d9ddc754093da34bd8a37bc07\"" Dec 16 13:15:22.524940 containerd[1981]: time="2025-12-16T13:15:22.523373454Z" level=info msg="StartContainer for \"ca01f2cb250bb035e9b1b6fa4e6ea040a6dfce1d9ddc754093da34bd8a37bc07\"" Dec 16 13:15:22.526127 containerd[1981]: time="2025-12-16T13:15:22.526052261Z" level=info msg="connecting to shim ca01f2cb250bb035e9b1b6fa4e6ea040a6dfce1d9ddc754093da34bd8a37bc07" address="unix:///run/containerd/s/0ffefd6d6b0f8dffc2ab432b2e8275cdd683781cc269833005060365eb94b442" protocol=ttrpc version=3 Dec 16 13:15:22.562138 systemd[1]: Started cri-containerd-ca01f2cb250bb035e9b1b6fa4e6ea040a6dfce1d9ddc754093da34bd8a37bc07.scope - libcontainer container 
ca01f2cb250bb035e9b1b6fa4e6ea040a6dfce1d9ddc754093da34bd8a37bc07. Dec 16 13:15:22.622462 containerd[1981]: time="2025-12-16T13:15:22.622399059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mpggz,Uid:5e30039f-976c-4f39-a91e-eac0996660a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba\"" Dec 16 13:15:22.633463 containerd[1981]: time="2025-12-16T13:15:22.633263501Z" level=info msg="CreateContainer within sandbox \"8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:15:22.640898 containerd[1981]: time="2025-12-16T13:15:22.640867095Z" level=info msg="StartContainer for \"ca01f2cb250bb035e9b1b6fa4e6ea040a6dfce1d9ddc754093da34bd8a37bc07\" returns successfully" Dec 16 13:15:22.655593 containerd[1981]: time="2025-12-16T13:15:22.655260064Z" level=info msg="Container 30b1dd673fe78efae7ae4cd71f790dd633e16a9f07eea8cc7d852213225c9bc4: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:15:22.661294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531516214.mount: Deactivated successfully. 
Dec 16 13:15:22.670431 containerd[1981]: time="2025-12-16T13:15:22.670361305Z" level=info msg="CreateContainer within sandbox \"8530f0f25f366686a57c0d84d4682dc304cfb5d3b9f536e517e8a9c2a9d49dba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30b1dd673fe78efae7ae4cd71f790dd633e16a9f07eea8cc7d852213225c9bc4\"" Dec 16 13:15:22.672483 containerd[1981]: time="2025-12-16T13:15:22.672208026Z" level=info msg="StartContainer for \"30b1dd673fe78efae7ae4cd71f790dd633e16a9f07eea8cc7d852213225c9bc4\"" Dec 16 13:15:22.673034 containerd[1981]: time="2025-12-16T13:15:22.672996954Z" level=info msg="connecting to shim 30b1dd673fe78efae7ae4cd71f790dd633e16a9f07eea8cc7d852213225c9bc4" address="unix:///run/containerd/s/d713b561195c0e8f9a871ad199de4c71e5931261a8153cae6c0d043d30c71558" protocol=ttrpc version=3 Dec 16 13:15:22.703791 systemd[1]: Started cri-containerd-30b1dd673fe78efae7ae4cd71f790dd633e16a9f07eea8cc7d852213225c9bc4.scope - libcontainer container 30b1dd673fe78efae7ae4cd71f790dd633e16a9f07eea8cc7d852213225c9bc4. 
Dec 16 13:15:22.744816 containerd[1981]: time="2025-12-16T13:15:22.744778094Z" level=info msg="StartContainer for \"30b1dd673fe78efae7ae4cd71f790dd633e16a9f07eea8cc7d852213225c9bc4\" returns successfully" Dec 16 13:15:22.782866 containerd[1981]: time="2025-12-16T13:15:22.782710965Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:22.785141 containerd[1981]: time="2025-12-16T13:15:22.785097691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:22.785460 containerd[1981]: time="2025-12-16T13:15:22.785180157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:22.786446 kubelet[3348]: E1216 13:15:22.785541 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:22.786446 kubelet[3348]: E1216 13:15:22.786416 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:22.786643 kubelet[3348]: E1216 13:15:22.786603 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5dbb4c8d86-dk448_calico-apiserver(97ebb483-74aa-4963-b528-353f8ea2fd10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:22.786678 kubelet[3348]: E1216 13:15:22.786638 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:15:22.788469 containerd[1981]: time="2025-12-16T13:15:22.788352962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:15:23.051362 containerd[1981]: time="2025-12-16T13:15:23.051257534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:23.053400 containerd[1981]: time="2025-12-16T13:15:23.053346873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:15:23.053498 containerd[1981]: time="2025-12-16T13:15:23.053436486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:15:23.053876 kubelet[3348]: E1216 13:15:23.053699 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:23.053876 kubelet[3348]: E1216 13:15:23.053855 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:23.062077 kubelet[3348]: E1216 13:15:23.062025 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:23.063956 containerd[1981]: time="2025-12-16T13:15:23.063919147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:15:23.145925 kubelet[3348]: E1216 13:15:23.145874 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:15:23.207342 kubelet[3348]: I1216 13:15:23.207282 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-66bc5c9577-4bxdj" podStartSLOduration=46.19902014 podStartE2EDuration="46.19902014s" podCreationTimestamp="2025-12-16 13:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:15:23.198572721 +0000 UTC m=+51.960805692" watchObservedRunningTime="2025-12-16 13:15:23.19902014 +0000 UTC m=+51.961253120" Dec 16 13:15:23.218551 kubelet[3348]: I1216 13:15:23.218491 3348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mpggz" podStartSLOduration=46.218473645 podStartE2EDuration="46.218473645s" podCreationTimestamp="2025-12-16 13:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:15:23.215783953 +0000 UTC m=+51.978016950" watchObservedRunningTime="2025-12-16 13:15:23.218473645 +0000 UTC m=+51.980706619" Dec 16 13:15:23.328388 containerd[1981]: time="2025-12-16T13:15:23.328247454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:23.330549 containerd[1981]: time="2025-12-16T13:15:23.330415445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:15:23.330831 containerd[1981]: time="2025-12-16T13:15:23.330795619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:15:23.331988 kubelet[3348]: E1216 13:15:23.331938 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:15:23.332228 kubelet[3348]: E1216 13:15:23.331998 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:15:23.333628 kubelet[3348]: E1216 13:15:23.332392 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:23.333628 kubelet[3348]: E1216 13:15:23.333297 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:23.337908 systemd-networkd[1850]: calibebc517ac9b: Gained IPv6LL Dec 16 13:15:23.530227 systemd-networkd[1850]: cali7e4d7d2e306: Gained IPv6LL Dec 16 13:15:23.978177 systemd-networkd[1850]: califf4abec96b1: Gained IPv6LL Dec 16 13:15:24.184025 kubelet[3348]: E1216 13:15:24.183965 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:15:24.187968 kubelet[3348]: E1216 13:15:24.187896 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:24.297912 systemd-networkd[1850]: cali4170e447b3a: Gained IPv6LL Dec 16 13:15:26.332990 ntpd[2233]: Listen normally on 6 vxlan.calico 192.168.92.64:123 Dec 16 13:15:26.333052 ntpd[2233]: Listen normally on 7 cali1d98b5924fb [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 13:15:26.333081 ntpd[2233]: Listen normally on 8 calid2c4a04bff6 [fe80::ecee:eeff:feee:eeee%5]:123 Dec 16 13:15:26.333109 ntpd[2233]: Listen normally on 9 vxlan.calico 
[fe80::6414:9aff:fe3e:6dd4%6]:123 Dec 16 13:15:26.333128 ntpd[2233]: Listen normally on 10 cali5f703d26dc5 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 13:15:26.333156 ntpd[2233]: Listen normally on 11 califba71c70843 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 13:15:26.333175 ntpd[2233]: Listen normally on 12 calibebc517ac9b [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 13:15:26.333195 ntpd[2233]: Listen normally on 13 califf4abec96b1 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 13:15:26.333213 ntpd[2233]: Listen normally on 14 cali7e4d7d2e306 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 13:15:26.333232 ntpd[2233]: Listen normally on 15 cali4170e447b3a [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 13:15:33.482444 containerd[1981]: time="2025-12-16T13:15:33.482368701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:33.744756 containerd[1981]: time="2025-12-16T13:15:33.744619469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:33.747585 containerd[1981]: time="2025-12-16T13:15:33.747506402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:33.747744 containerd[1981]: time="2025-12-16T13:15:33.747723619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:33.747864 kubelet[3348]: E1216 13:15:33.747829 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 
13:15:33.748404 kubelet[3348]: E1216 13:15:33.747872 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:33.748404 kubelet[3348]: E1216 13:15:33.748131 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dbb4c8d86-x8rs5_calico-apiserver(d1b7644f-3acf-411e-a5e8-2f3cc85e178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:33.748491 containerd[1981]: time="2025-12-16T13:15:33.748215696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:15:33.748664 kubelet[3348]: E1216 13:15:33.748172 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:15:33.970074 systemd[1]: Started sshd@9-172.31.28.249:22-139.178.68.195:51030.service - OpenSSH per-connection server daemon (139.178.68.195:51030). 
Dec 16 13:15:33.993678 containerd[1981]: time="2025-12-16T13:15:33.993624644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:33.998700 containerd[1981]: time="2025-12-16T13:15:33.997404034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:15:34.000316 containerd[1981]: time="2025-12-16T13:15:33.997433187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:15:34.000571 kubelet[3348]: E1216 13:15:34.000439 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:15:34.000571 kubelet[3348]: E1216 13:15:34.000501 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:15:34.002656 kubelet[3348]: E1216 13:15:34.002443 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf79d7c7c-wlbjc_calico-system(283d7557-65a8-4b3b-9bfa-2489f569eafb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:34.002656 kubelet[3348]: E1216 13:15:34.002521 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:15:34.237513 sshd[5545]: Accepted publickey for core from 139.178.68.195 port 51030 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:15:34.240904 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:34.247737 systemd-logind[1962]: New session 10 of user core. Dec 16 13:15:34.252770 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 13:15:34.488104 containerd[1981]: time="2025-12-16T13:15:34.488024355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:15:34.737328 containerd[1981]: time="2025-12-16T13:15:34.737151439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:34.739324 containerd[1981]: time="2025-12-16T13:15:34.739267002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:15:34.739457 containerd[1981]: time="2025-12-16T13:15:34.739356336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:15:34.740101 kubelet[3348]: E1216 13:15:34.739573 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:34.740101 kubelet[3348]: E1216 13:15:34.739620 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:15:34.740101 kubelet[3348]: E1216 13:15:34.739751 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:34.742166 containerd[1981]: time="2025-12-16T13:15:34.741420884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:15:34.993225 containerd[1981]: time="2025-12-16T13:15:34.992988709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:34.995391 containerd[1981]: time="2025-12-16T13:15:34.995031282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:15:34.995391 containerd[1981]: time="2025-12-16T13:15:34.995060372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:15:34.995590 kubelet[3348]: E1216 13:15:34.995314 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:15:34.995590 kubelet[3348]: E1216 13:15:34.995363 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 
13:15:34.995590 kubelet[3348]: E1216 13:15:34.995451 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:34.996068 kubelet[3348]: E1216 13:15:34.995503 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:15:35.043281 sshd[5550]: Connection closed by 139.178.68.195 port 51030 Dec 16 13:15:35.043810 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:35.049606 systemd[1]: sshd@9-172.31.28.249:22-139.178.68.195:51030.service: Deactivated successfully. Dec 16 13:15:35.051675 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:15:35.052772 systemd-logind[1962]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:15:35.054802 systemd-logind[1962]: Removed session 10. 
Dec 16 13:15:36.480735 containerd[1981]: time="2025-12-16T13:15:36.480699677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:15:36.722355 containerd[1981]: time="2025-12-16T13:15:36.721753378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:36.724413 containerd[1981]: time="2025-12-16T13:15:36.724361095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:15:36.724614 containerd[1981]: time="2025-12-16T13:15:36.724468011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:36.724742 kubelet[3348]: E1216 13:15:36.724685 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:15:36.724742 kubelet[3348]: E1216 13:15:36.724729 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:15:36.725188 kubelet[3348]: E1216 13:15:36.724818 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mtpks_calico-system(849d73a2-70ae-4c16-a2df-5353f11e5191): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:36.725188 kubelet[3348]: E1216 13:15:36.724847 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:15:37.482789 containerd[1981]: time="2025-12-16T13:15:37.482752990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:15:37.742875 containerd[1981]: time="2025-12-16T13:15:37.742733352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:37.745070 containerd[1981]: time="2025-12-16T13:15:37.744915433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:15:37.745202 containerd[1981]: time="2025-12-16T13:15:37.745171893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:15:37.745405 kubelet[3348]: E1216 13:15:37.745316 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:37.745405 kubelet[3348]: E1216 13:15:37.745355 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:15:37.745949 kubelet[3348]: E1216 13:15:37.745424 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dbb4c8d86-dk448_calico-apiserver(97ebb483-74aa-4963-b528-353f8ea2fd10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:37.745949 kubelet[3348]: E1216 13:15:37.745456 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:15:39.482497 containerd[1981]: time="2025-12-16T13:15:39.482078679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:15:39.780953 containerd[1981]: time="2025-12-16T13:15:39.780824778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:39.783005 containerd[1981]: time="2025-12-16T13:15:39.782958424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:15:39.783183 containerd[1981]: time="2025-12-16T13:15:39.783046468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:15:39.783277 kubelet[3348]: E1216 13:15:39.783233 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:39.783976 kubelet[3348]: E1216 13:15:39.783294 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:15:39.783976 kubelet[3348]: E1216 13:15:39.783392 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:39.784878 containerd[1981]: time="2025-12-16T13:15:39.784857101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:15:40.075055 containerd[1981]: time="2025-12-16T13:15:40.074834999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:15:40.078520 
containerd[1981]: time="2025-12-16T13:15:40.076908457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:15:40.078826 systemd[1]: Started sshd@10-172.31.28.249:22-139.178.68.195:51034.service - OpenSSH per-connection server daemon (139.178.68.195:51034). Dec 16 13:15:40.082368 kubelet[3348]: E1216 13:15:40.081913 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:15:40.082368 kubelet[3348]: E1216 13:15:40.081962 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:15:40.082368 kubelet[3348]: E1216 13:15:40.082264 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:15:40.085881 kubelet[3348]: E1216 13:15:40.083949 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:40.091642 containerd[1981]: time="2025-12-16T13:15:40.077002469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:15:40.253511 sshd[5571]: Accepted publickey for core from 139.178.68.195 port 51034 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:15:40.255231 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:40.260726 systemd-logind[1962]: New session 11 of user core. Dec 16 13:15:40.265952 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:15:40.466882 sshd[5574]: Connection closed by 139.178.68.195 port 51034 Dec 16 13:15:40.467489 sshd-session[5571]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:40.471743 systemd[1]: sshd@10-172.31.28.249:22-139.178.68.195:51034.service: Deactivated successfully. Dec 16 13:15:40.473871 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 16 13:15:40.475003 systemd-logind[1962]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:15:40.476676 systemd-logind[1962]: Removed session 11. Dec 16 13:15:45.503641 systemd[1]: Started sshd@11-172.31.28.249:22-139.178.68.195:54448.service - OpenSSH per-connection server daemon (139.178.68.195:54448). Dec 16 13:15:45.669325 sshd[5589]: Accepted publickey for core from 139.178.68.195 port 54448 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:15:45.670729 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:45.676008 systemd-logind[1962]: New session 12 of user core. Dec 16 13:15:45.686797 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:15:45.875160 sshd[5592]: Connection closed by 139.178.68.195 port 54448 Dec 16 13:15:45.875910 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:45.880799 systemd[1]: sshd@11-172.31.28.249:22-139.178.68.195:54448.service: Deactivated successfully. Dec 16 13:15:45.883131 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:15:45.884759 systemd-logind[1962]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:15:45.886857 systemd-logind[1962]: Removed session 12. Dec 16 13:15:45.910822 systemd[1]: Started sshd@12-172.31.28.249:22-139.178.68.195:54452.service - OpenSSH per-connection server daemon (139.178.68.195:54452). Dec 16 13:15:46.084619 sshd[5605]: Accepted publickey for core from 139.178.68.195 port 54452 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:15:46.086180 sshd-session[5605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:46.091621 systemd-logind[1962]: New session 13 of user core. Dec 16 13:15:46.100824 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 13:15:46.363799 sshd[5608]: Connection closed by 139.178.68.195 port 54452 Dec 16 13:15:46.363704 sshd-session[5605]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:46.370703 systemd-logind[1962]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:15:46.371249 systemd[1]: sshd@12-172.31.28.249:22-139.178.68.195:54452.service: Deactivated successfully. Dec 16 13:15:46.376143 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:15:46.379130 systemd-logind[1962]: Removed session 13. Dec 16 13:15:46.396626 systemd[1]: Started sshd@13-172.31.28.249:22-139.178.68.195:54454.service - OpenSSH per-connection server daemon (139.178.68.195:54454). Dec 16 13:15:46.586190 sshd[5617]: Accepted publickey for core from 139.178.68.195 port 54454 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:15:46.587623 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:46.594312 systemd-logind[1962]: New session 14 of user core. Dec 16 13:15:46.600789 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 13:15:46.835107 sshd[5620]: Connection closed by 139.178.68.195 port 54454 Dec 16 13:15:46.835995 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:46.839766 systemd[1]: sshd@13-172.31.28.249:22-139.178.68.195:54454.service: Deactivated successfully. Dec 16 13:15:46.841740 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:15:46.842732 systemd-logind[1962]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:15:46.844630 systemd-logind[1962]: Removed session 14. 
Dec 16 13:15:47.482137 kubelet[3348]: E1216 13:15:47.482058 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:15:47.483141 kubelet[3348]: E1216 13:15:47.482553 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:15:48.482679 kubelet[3348]: E1216 13:15:48.482622 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:15:49.483175 kubelet[3348]: E1216 13:15:49.483118 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:15:50.481173 kubelet[3348]: E1216 13:15:50.481125 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:15:51.870675 systemd[1]: Started sshd@14-172.31.28.249:22-139.178.68.195:34708.service - OpenSSH per-connection server daemon (139.178.68.195:34708). 
Dec 16 13:15:52.134995 sshd[5660]: Accepted publickey for core from 139.178.68.195 port 34708 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:15:52.140841 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:52.151527 systemd-logind[1962]: New session 15 of user core. Dec 16 13:15:52.154909 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:15:52.379026 sshd[5663]: Connection closed by 139.178.68.195 port 34708 Dec 16 13:15:52.379851 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:52.388220 systemd[1]: sshd@14-172.31.28.249:22-139.178.68.195:34708.service: Deactivated successfully. Dec 16 13:15:52.388283 systemd-logind[1962]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:15:52.391338 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:15:52.393512 systemd-logind[1962]: Removed session 15. Dec 16 13:15:53.484536 kubelet[3348]: E1216 13:15:53.484407 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" 
podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:15:57.415030 systemd[1]: Started sshd@15-172.31.28.249:22-139.178.68.195:34712.service - OpenSSH per-connection server daemon (139.178.68.195:34712). Dec 16 13:15:57.652595 sshd[5676]: Accepted publickey for core from 139.178.68.195 port 34712 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:15:57.655604 sshd-session[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:15:57.663660 systemd-logind[1962]: New session 16 of user core. Dec 16 13:15:57.666826 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:15:57.901227 sshd[5679]: Connection closed by 139.178.68.195 port 34712 Dec 16 13:15:57.901825 sshd-session[5676]: pam_unix(sshd:session): session closed for user core Dec 16 13:15:57.910936 systemd[1]: sshd@15-172.31.28.249:22-139.178.68.195:34712.service: Deactivated successfully. Dec 16 13:15:57.911375 systemd-logind[1962]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:15:57.914707 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:15:57.916690 systemd-logind[1962]: Removed session 16. 
Dec 16 13:16:01.485577 containerd[1981]: time="2025-12-16T13:16:01.485522848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:16:01.778381 containerd[1981]: time="2025-12-16T13:16:01.777894061Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:01.782006 containerd[1981]: time="2025-12-16T13:16:01.781896309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:16:01.782006 containerd[1981]: time="2025-12-16T13:16:01.781973137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:16:01.789121 kubelet[3348]: E1216 13:16:01.787097 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:01.789121 kubelet[3348]: E1216 13:16:01.789057 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:01.789778 kubelet[3348]: E1216 13:16:01.789263 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dbb4c8d86-x8rs5_calico-apiserver(d1b7644f-3acf-411e-a5e8-2f3cc85e178b): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:01.789778 kubelet[3348]: E1216 13:16:01.789333 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:16:01.790513 containerd[1981]: time="2025-12-16T13:16:01.790472158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:16:02.140986 containerd[1981]: time="2025-12-16T13:16:02.140930407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:02.143483 containerd[1981]: time="2025-12-16T13:16:02.143395109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 13:16:02.143772 containerd[1981]: time="2025-12-16T13:16:02.143398848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 13:16:02.143863 kubelet[3348]: E1216 13:16:02.143794 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:16:02.143942 kubelet[3348]: E1216 13:16:02.143850 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 13:16:02.144152 kubelet[3348]: E1216 13:16:02.143992 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf79d7c7c-wlbjc_calico-system(283d7557-65a8-4b3b-9bfa-2489f569eafb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:02.144152 kubelet[3348]: E1216 13:16:02.144082 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:16:02.486839 containerd[1981]: time="2025-12-16T13:16:02.486617866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:16:02.782608 containerd[1981]: time="2025-12-16T13:16:02.782363040Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:02.784860 containerd[1981]: time="2025-12-16T13:16:02.784789450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:16:02.784860 containerd[1981]: time="2025-12-16T13:16:02.784816399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:16:02.785246 kubelet[3348]: E1216 13:16:02.785075 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:02.785246 kubelet[3348]: E1216 13:16:02.785132 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:02.785373 kubelet[3348]: E1216 13:16:02.785348 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dbb4c8d86-dk448_calico-apiserver(97ebb483-74aa-4963-b528-353f8ea2fd10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 
13:16:02.785525 kubelet[3348]: E1216 13:16:02.785480 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:16:02.786255 containerd[1981]: time="2025-12-16T13:16:02.786222492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:16:02.946389 systemd[1]: Started sshd@16-172.31.28.249:22-139.178.68.195:47124.service - OpenSSH per-connection server daemon (139.178.68.195:47124). Dec 16 13:16:03.058346 containerd[1981]: time="2025-12-16T13:16:03.057859481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:03.060809 containerd[1981]: time="2025-12-16T13:16:03.060761397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:16:03.061000 containerd[1981]: time="2025-12-16T13:16:03.060802089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:16:03.061124 kubelet[3348]: E1216 13:16:03.061077 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 
13:16:03.061494 kubelet[3348]: E1216 13:16:03.061137 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:16:03.061494 kubelet[3348]: E1216 13:16:03.061230 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:03.063465 containerd[1981]: time="2025-12-16T13:16:03.063189502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:16:03.158432 sshd[5698]: Accepted publickey for core from 139.178.68.195 port 47124 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:03.162547 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:03.172827 systemd-logind[1962]: New session 17 of user core. Dec 16 13:16:03.186490 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 13:16:03.353537 containerd[1981]: time="2025-12-16T13:16:03.353483742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:03.357062 containerd[1981]: time="2025-12-16T13:16:03.356860405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:16:03.357062 containerd[1981]: time="2025-12-16T13:16:03.356859822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:16:03.357804 kubelet[3348]: E1216 13:16:03.357750 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:16:03.357804 kubelet[3348]: E1216 13:16:03.357802 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:16:03.359719 kubelet[3348]: E1216 13:16:03.359675 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:03.359851 kubelet[3348]: E1216 13:16:03.359752 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:16:03.449402 sshd[5701]: Connection closed by 139.178.68.195 port 47124 Dec 16 13:16:03.450368 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:03.455323 systemd[1]: sshd@16-172.31.28.249:22-139.178.68.195:47124.service: Deactivated successfully. Dec 16 13:16:03.457859 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 13:16:03.459851 systemd-logind[1962]: Session 17 logged out. Waiting for processes to exit. Dec 16 13:16:03.461782 systemd-logind[1962]: Removed session 17. 
Dec 16 13:16:03.484938 containerd[1981]: time="2025-12-16T13:16:03.484398948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:16:03.754581 containerd[1981]: time="2025-12-16T13:16:03.754349731Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:03.756510 containerd[1981]: time="2025-12-16T13:16:03.756467182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:16:03.756760 containerd[1981]: time="2025-12-16T13:16:03.756484493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:16:03.756894 kubelet[3348]: E1216 13:16:03.756855 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:16:03.756949 kubelet[3348]: E1216 13:16:03.756914 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:16:03.757026 kubelet[3348]: E1216 13:16:03.757006 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mtpks_calico-system(849d73a2-70ae-4c16-a2df-5353f11e5191): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:03.757429 kubelet[3348]: E1216 13:16:03.757390 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:16:07.484304 containerd[1981]: time="2025-12-16T13:16:07.483888657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:16:07.805672 containerd[1981]: time="2025-12-16T13:16:07.805523492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:07.809588 containerd[1981]: time="2025-12-16T13:16:07.807669154Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 13:16:07.809588 containerd[1981]: time="2025-12-16T13:16:07.807792062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:16:07.809768 kubelet[3348]: E1216 13:16:07.808019 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:16:07.809768 
kubelet[3348]: E1216 13:16:07.808066 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:16:07.809768 kubelet[3348]: E1216 13:16:07.808153 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:07.810884 containerd[1981]: time="2025-12-16T13:16:07.810849167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:16:08.071105 containerd[1981]: time="2025-12-16T13:16:08.070971241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:08.073392 containerd[1981]: time="2025-12-16T13:16:08.073330997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:16:08.073735 containerd[1981]: time="2025-12-16T13:16:08.073446264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:16:08.073810 kubelet[3348]: E1216 13:16:08.073664 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:16:08.073810 kubelet[3348]: E1216 13:16:08.073714 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:16:08.074804 kubelet[3348]: E1216 13:16:08.074669 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:08.074804 kubelet[3348]: E1216 13:16:08.074746 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:16:08.493267 systemd[1]: Started sshd@17-172.31.28.249:22-139.178.68.195:47130.service - OpenSSH per-connection server daemon (139.178.68.195:47130). Dec 16 13:16:08.782041 sshd[5716]: Accepted publickey for core from 139.178.68.195 port 47130 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:08.786168 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:08.797532 systemd-logind[1962]: New session 18 of user core. Dec 16 13:16:08.803750 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 13:16:09.402715 sshd[5719]: Connection closed by 139.178.68.195 port 47130 Dec 16 13:16:09.405094 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:09.413582 systemd[1]: sshd@17-172.31.28.249:22-139.178.68.195:47130.service: Deactivated successfully. Dec 16 13:16:09.414323 systemd-logind[1962]: Session 18 logged out. Waiting for processes to exit. Dec 16 13:16:09.417872 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 13:16:09.421862 systemd-logind[1962]: Removed session 18. Dec 16 13:16:09.439506 systemd[1]: Started sshd@18-172.31.28.249:22-139.178.68.195:47138.service - OpenSSH per-connection server daemon (139.178.68.195:47138). Dec 16 13:16:09.626618 sshd[5730]: Accepted publickey for core from 139.178.68.195 port 47138 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:09.629074 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:09.638856 systemd-logind[1962]: New session 19 of user core. Dec 16 13:16:09.645788 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 16 13:16:10.417712 sshd[5733]: Connection closed by 139.178.68.195 port 47138 Dec 16 13:16:10.420790 sshd-session[5730]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:10.431098 systemd[1]: sshd@18-172.31.28.249:22-139.178.68.195:47138.service: Deactivated successfully. Dec 16 13:16:10.435413 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 13:16:10.440021 systemd-logind[1962]: Session 19 logged out. Waiting for processes to exit. Dec 16 13:16:10.459489 systemd[1]: Started sshd@19-172.31.28.249:22-139.178.68.195:52246.service - OpenSSH per-connection server daemon (139.178.68.195:52246). Dec 16 13:16:10.463303 systemd-logind[1962]: Removed session 19. Dec 16 13:16:10.669669 sshd[5743]: Accepted publickey for core from 139.178.68.195 port 52246 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:10.672819 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:10.685885 systemd-logind[1962]: New session 20 of user core. Dec 16 13:16:10.692067 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 13:16:11.717184 sshd[5746]: Connection closed by 139.178.68.195 port 52246 Dec 16 13:16:11.716753 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:11.726954 systemd[1]: sshd@19-172.31.28.249:22-139.178.68.195:52246.service: Deactivated successfully. Dec 16 13:16:11.728719 systemd-logind[1962]: Session 20 logged out. Waiting for processes to exit. Dec 16 13:16:11.732103 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 13:16:11.736107 systemd-logind[1962]: Removed session 20. Dec 16 13:16:11.754421 systemd[1]: Started sshd@20-172.31.28.249:22-139.178.68.195:52248.service - OpenSSH per-connection server daemon (139.178.68.195:52248). 
Dec 16 13:16:11.972426 sshd[5765]: Accepted publickey for core from 139.178.68.195 port 52248 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:11.973121 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:11.979077 systemd-logind[1962]: New session 21 of user core. Dec 16 13:16:11.983749 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 13:16:12.621926 sshd[5768]: Connection closed by 139.178.68.195 port 52248 Dec 16 13:16:12.624782 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:12.631829 systemd-logind[1962]: Session 21 logged out. Waiting for processes to exit. Dec 16 13:16:12.632287 systemd[1]: sshd@20-172.31.28.249:22-139.178.68.195:52248.service: Deactivated successfully. Dec 16 13:16:12.638070 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 13:16:12.657303 systemd-logind[1962]: Removed session 21. Dec 16 13:16:12.662889 systemd[1]: Started sshd@21-172.31.28.249:22-139.178.68.195:52262.service - OpenSSH per-connection server daemon (139.178.68.195:52262). Dec 16 13:16:12.851192 sshd[5778]: Accepted publickey for core from 139.178.68.195 port 52262 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:12.853714 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:12.863709 systemd-logind[1962]: New session 22 of user core. Dec 16 13:16:12.870810 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 13:16:13.219604 sshd[5783]: Connection closed by 139.178.68.195 port 52262 Dec 16 13:16:13.220240 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:13.228018 systemd-logind[1962]: Session 22 logged out. Waiting for processes to exit. Dec 16 13:16:13.229617 systemd[1]: sshd@21-172.31.28.249:22-139.178.68.195:52262.service: Deactivated successfully. 
Dec 16 13:16:13.234825 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 13:16:13.241318 systemd-logind[1962]: Removed session 22. Dec 16 13:16:13.482751 kubelet[3348]: E1216 13:16:13.482313 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:16:14.483711 kubelet[3348]: E1216 13:16:14.483653 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:16:14.484716 kubelet[3348]: E1216 13:16:14.484680 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:16:15.484112 kubelet[3348]: E1216 13:16:15.484065 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:16:18.252053 systemd[1]: Started sshd@22-172.31.28.249:22-139.178.68.195:52264.service - OpenSSH per-connection server daemon (139.178.68.195:52264). Dec 16 13:16:18.485823 kubelet[3348]: E1216 13:16:18.485741 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:16:18.488622 sshd[5822]: Accepted publickey for 
core from 139.178.68.195 port 52264 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:18.493146 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:18.507898 systemd-logind[1962]: New session 23 of user core. Dec 16 13:16:18.509801 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 13:16:18.881622 sshd[5826]: Connection closed by 139.178.68.195 port 52264 Dec 16 13:16:18.880960 sshd-session[5822]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:18.889056 systemd[1]: sshd@22-172.31.28.249:22-139.178.68.195:52264.service: Deactivated successfully. Dec 16 13:16:18.894655 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 13:16:18.896791 systemd-logind[1962]: Session 23 logged out. Waiting for processes to exit. Dec 16 13:16:18.900922 systemd-logind[1962]: Removed session 23. Dec 16 13:16:23.484851 kubelet[3348]: E1216 13:16:23.484774 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 
13:16:23.917879 systemd[1]: Started sshd@23-172.31.28.249:22-139.178.68.195:55616.service - OpenSSH per-connection server daemon (139.178.68.195:55616). Dec 16 13:16:24.102325 sshd[5840]: Accepted publickey for core from 139.178.68.195 port 55616 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:24.107142 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:24.115242 systemd-logind[1962]: New session 24 of user core. Dec 16 13:16:24.120760 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 13:16:24.344253 sshd[5843]: Connection closed by 139.178.68.195 port 55616 Dec 16 13:16:24.345863 sshd-session[5840]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:24.352062 systemd-logind[1962]: Session 24 logged out. Waiting for processes to exit. Dec 16 13:16:24.353415 systemd[1]: sshd@23-172.31.28.249:22-139.178.68.195:55616.service: Deactivated successfully. Dec 16 13:16:24.357126 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 13:16:24.362512 systemd-logind[1962]: Removed session 24. 
Dec 16 13:16:26.487763 kubelet[3348]: E1216 13:16:26.487416 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:16:27.484442 kubelet[3348]: E1216 13:16:27.484297 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:16:28.482363 kubelet[3348]: E1216 13:16:28.482294 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:16:29.385299 systemd[1]: Started 
sshd@24-172.31.28.249:22-139.178.68.195:55630.service - OpenSSH per-connection server daemon (139.178.68.195:55630). Dec 16 13:16:29.487453 kubelet[3348]: E1216 13:16:29.487391 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:16:29.491812 kubelet[3348]: E1216 13:16:29.491703 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:16:29.592695 sshd[5856]: Accepted publickey for core from 139.178.68.195 port 55630 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:29.593459 
sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:29.602833 systemd-logind[1962]: New session 25 of user core. Dec 16 13:16:29.609793 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 13:16:29.858600 sshd[5859]: Connection closed by 139.178.68.195 port 55630 Dec 16 13:16:29.859237 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:29.865667 systemd-logind[1962]: Session 25 logged out. Waiting for processes to exit. Dec 16 13:16:29.866534 systemd[1]: sshd@24-172.31.28.249:22-139.178.68.195:55630.service: Deactivated successfully. Dec 16 13:16:29.870787 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 13:16:29.875539 systemd-logind[1962]: Removed session 25. Dec 16 13:16:34.895221 systemd[1]: Started sshd@25-172.31.28.249:22-139.178.68.195:42710.service - OpenSSH per-connection server daemon (139.178.68.195:42710). Dec 16 13:16:35.086687 sshd[5873]: Accepted publickey for core from 139.178.68.195 port 42710 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:35.089691 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:35.097102 systemd-logind[1962]: New session 26 of user core. Dec 16 13:16:35.105808 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 13:16:35.341433 sshd[5876]: Connection closed by 139.178.68.195 port 42710 Dec 16 13:16:35.343554 sshd-session[5873]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:35.348897 systemd[1]: sshd@25-172.31.28.249:22-139.178.68.195:42710.service: Deactivated successfully. Dec 16 13:16:35.352167 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 13:16:35.354619 systemd-logind[1962]: Session 26 logged out. Waiting for processes to exit. Dec 16 13:16:35.356506 systemd-logind[1962]: Removed session 26. 
Dec 16 13:16:37.487140 kubelet[3348]: E1216 13:16:37.487029 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:16:40.375424 systemd[1]: Started sshd@26-172.31.28.249:22-139.178.68.195:59926.service - OpenSSH per-connection server daemon (139.178.68.195:59926). 
Dec 16 13:16:40.481209 kubelet[3348]: E1216 13:16:40.481138 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:16:40.552994 sshd[5896]: Accepted publickey for core from 139.178.68.195 port 59926 ssh2: RSA SHA256:KgRmoHVEyOWjzfUhaFRQ+ZRIq2mz7oz/8HCidtOBkAM Dec 16 13:16:40.555495 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:16:40.563674 systemd-logind[1962]: New session 27 of user core. Dec 16 13:16:40.572388 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 13:16:40.859884 sshd[5899]: Connection closed by 139.178.68.195 port 59926 Dec 16 13:16:40.860789 sshd-session[5896]: pam_unix(sshd:session): session closed for user core Dec 16 13:16:40.869971 systemd[1]: sshd@26-172.31.28.249:22-139.178.68.195:59926.service: Deactivated successfully. Dec 16 13:16:40.871581 systemd-logind[1962]: Session 27 logged out. Waiting for processes to exit. Dec 16 13:16:40.876495 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 13:16:40.883199 systemd-logind[1962]: Removed session 27. 
Dec 16 13:16:41.487439 kubelet[3348]: E1216 13:16:41.487378 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb" Dec 16 13:16:41.494601 kubelet[3348]: E1216 13:16:41.494528 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:16:42.483491 containerd[1981]: time="2025-12-16T13:16:42.483438385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:16:42.734769 containerd[1981]: time="2025-12-16T13:16:42.734351680Z" level=info msg="fetch failed after status: 
404 Not Found" host=ghcr.io Dec 16 13:16:42.736736 containerd[1981]: time="2025-12-16T13:16:42.736646122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:16:42.737056 containerd[1981]: time="2025-12-16T13:16:42.736693126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:16:42.737285 kubelet[3348]: E1216 13:16:42.737197 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:42.737285 kubelet[3348]: E1216 13:16:42.737274 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:42.737858 kubelet[3348]: E1216 13:16:42.737833 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dbb4c8d86-x8rs5_calico-apiserver(d1b7644f-3acf-411e-a5e8-2f3cc85e178b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:42.737910 
kubelet[3348]: E1216 13:16:42.737888 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b" Dec 16 13:16:45.482033 containerd[1981]: time="2025-12-16T13:16:45.481655468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 13:16:45.760888 containerd[1981]: time="2025-12-16T13:16:45.760515204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:45.762759 containerd[1981]: time="2025-12-16T13:16:45.762648643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 13:16:45.762759 containerd[1981]: time="2025-12-16T13:16:45.762697521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 13:16:45.763160 kubelet[3348]: E1216 13:16:45.763102 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:45.763160 kubelet[3348]: E1216 13:16:45.763152 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 13:16:45.763592 kubelet[3348]: E1216 13:16:45.763236 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5dbb4c8d86-dk448_calico-apiserver(97ebb483-74aa-4963-b528-353f8ea2fd10): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:45.763592 kubelet[3348]: E1216 13:16:45.763269 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10" Dec 16 13:16:51.481929 containerd[1981]: time="2025-12-16T13:16:51.481875189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 13:16:51.785865 containerd[1981]: time="2025-12-16T13:16:51.785725879Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:51.788364 containerd[1981]: time="2025-12-16T13:16:51.788313824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 
13:16:51.788552 containerd[1981]: time="2025-12-16T13:16:51.788334926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 13:16:51.788734 kubelet[3348]: E1216 13:16:51.788689 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:16:51.789264 kubelet[3348]: E1216 13:16:51.788740 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 13:16:51.789264 kubelet[3348]: E1216 13:16:51.788811 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:51.789882 containerd[1981]: time="2025-12-16T13:16:51.789840191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 13:16:52.039111 containerd[1981]: time="2025-12-16T13:16:52.038969640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:52.041080 containerd[1981]: time="2025-12-16T13:16:52.041026989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 13:16:52.041193 containerd[1981]: time="2025-12-16T13:16:52.041054370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 13:16:52.041341 kubelet[3348]: E1216 13:16:52.041293 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:16:52.041391 kubelet[3348]: E1216 13:16:52.041338 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 13:16:52.041447 kubelet[3348]: E1216 13:16:52.041403 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-cncts_calico-system(bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:52.041529 kubelet[3348]: E1216 13:16:52.041442 3348 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6" Dec 16 13:16:54.480728 containerd[1981]: time="2025-12-16T13:16:54.480682091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 13:16:54.769701 containerd[1981]: time="2025-12-16T13:16:54.769548495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:54.772041 containerd[1981]: time="2025-12-16T13:16:54.771963789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 13:16:54.772041 containerd[1981]: time="2025-12-16T13:16:54.772005413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 13:16:54.772240 kubelet[3348]: E1216 13:16:54.772189 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:16:54.772240 kubelet[3348]: E1216 13:16:54.772235 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 13:16:54.772906 kubelet[3348]: E1216 13:16:54.772299 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:54.773082 containerd[1981]: time="2025-12-16T13:16:54.773056513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 13:16:55.062727 containerd[1981]: time="2025-12-16T13:16:55.062581578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:55.064883 containerd[1981]: time="2025-12-16T13:16:55.064790795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 13:16:55.065440 containerd[1981]: time="2025-12-16T13:16:55.064869627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 13:16:55.065528 kubelet[3348]: E1216 13:16:55.065213 3348 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:16:55.065528 kubelet[3348]: E1216 13:16:55.065258 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 13:16:55.065528 kubelet[3348]: E1216 13:16:55.065342 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-fc58dbc98-4xrqt_calico-system(63986b45-f828-491f-8283-58bdcda10705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:55.065671 kubelet[3348]: E1216 13:16:55.065380 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705" Dec 16 13:16:55.482585 containerd[1981]: time="2025-12-16T13:16:55.482533732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 13:16:55.766033 containerd[1981]: time="2025-12-16T13:16:55.765910869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 13:16:55.768108 containerd[1981]: time="2025-12-16T13:16:55.768045848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 13:16:55.768232 containerd[1981]: time="2025-12-16T13:16:55.768129934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 13:16:55.768346 kubelet[3348]: E1216 13:16:55.768292 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:16:55.768346 kubelet[3348]: E1216 13:16:55.768338 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 13:16:55.768485 kubelet[3348]: E1216 13:16:55.768404 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in 
pod goldmane-7c778bb748-mtpks_calico-system(849d73a2-70ae-4c16-a2df-5353f11e5191): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 13:16:55.768485 kubelet[3348]: E1216 13:16:55.768437 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191" Dec 16 13:16:55.916221 systemd[1]: cri-containerd-6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb.scope: Deactivated successfully. Dec 16 13:16:55.916521 systemd[1]: cri-containerd-6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb.scope: Consumed 3.812s CPU time, 81.6M memory peak, 49M read from disk. Dec 16 13:16:55.971036 containerd[1981]: time="2025-12-16T13:16:55.970977536Z" level=info msg="received container exit event container_id:\"6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb\" id:\"6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb\" pid:3180 exit_status:1 exited_at:{seconds:1765891015 nanos:923950765}" Dec 16 13:16:56.055192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb-rootfs.mount: Deactivated successfully. 
Dec 16 13:16:56.482538 containerd[1981]: time="2025-12-16T13:16:56.482460294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 13:16:56.540180 systemd[1]: cri-containerd-a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3.scope: Deactivated successfully. Dec 16 13:16:56.540873 systemd[1]: cri-containerd-a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3.scope: Consumed 14.625s CPU time, 107.5M memory peak, 41.8M read from disk. Dec 16 13:16:56.544459 containerd[1981]: time="2025-12-16T13:16:56.544418400Z" level=info msg="received container exit event container_id:\"a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3\" id:\"a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3\" pid:3857 exit_status:1 exited_at:{seconds:1765891016 nanos:543337059}" Dec 16 13:16:56.575846 kubelet[3348]: I1216 13:16:56.575811 3348 scope.go:117] "RemoveContainer" containerID="6076dd2dcef5ecac86bff25681924e337d9a6696d5e4c7360c6b8bad780d63eb" Dec 16 13:16:56.583998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3-rootfs.mount: Deactivated successfully. Dec 16 13:16:56.622373 containerd[1981]: time="2025-12-16T13:16:56.622305369Z" level=info msg="CreateContainer within sandbox \"07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 13:16:56.699979 containerd[1981]: time="2025-12-16T13:16:56.699730575Z" level=info msg="Container a2c462f480c273e3a8d74448a7c7582828398c29048b79969394c273d10ed3a2: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:16:56.702156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282972337.mount: Deactivated successfully. 
Dec 16 13:16:56.719413 containerd[1981]: time="2025-12-16T13:16:56.719334990Z" level=info msg="CreateContainer within sandbox \"07a0b2867767211fe47f89cd4702e253150ca9d6680846d8496e6c90967f6152\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a2c462f480c273e3a8d74448a7c7582828398c29048b79969394c273d10ed3a2\""
Dec 16 13:16:56.719939 containerd[1981]: time="2025-12-16T13:16:56.719908289Z" level=info msg="StartContainer for \"a2c462f480c273e3a8d74448a7c7582828398c29048b79969394c273d10ed3a2\""
Dec 16 13:16:56.751005 containerd[1981]: time="2025-12-16T13:16:56.750884151Z" level=info msg="connecting to shim a2c462f480c273e3a8d74448a7c7582828398c29048b79969394c273d10ed3a2" address="unix:///run/containerd/s/b2acb580873b84f97e4bc3bbb8f9e22f66d959e13cb1c2056cce3d6c039ffc4a" protocol=ttrpc version=3
Dec 16 13:16:56.771437 containerd[1981]: time="2025-12-16T13:16:56.771260863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 13:16:56.773582 containerd[1981]: time="2025-12-16T13:16:56.773447168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 13:16:56.774477 kubelet[3348]: E1216 13:16:56.774087 3348 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:16:56.774477 kubelet[3348]: E1216 13:16:56.774137 3348 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 13:16:56.774477 kubelet[3348]: E1216 13:16:56.774245 3348 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6cf79d7c7c-wlbjc_calico-system(283d7557-65a8-4b3b-9bfa-2489f569eafb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 13:16:56.774477 kubelet[3348]: E1216 13:16:56.774294 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb"
Dec 16 13:16:56.775216 containerd[1981]: time="2025-12-16T13:16:56.774368106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 13:16:56.795888 systemd[1]: Started cri-containerd-a2c462f480c273e3a8d74448a7c7582828398c29048b79969394c273d10ed3a2.scope - libcontainer container a2c462f480c273e3a8d74448a7c7582828398c29048b79969394c273d10ed3a2.
Dec 16 13:16:56.871837 containerd[1981]: time="2025-12-16T13:16:56.871793474Z" level=info msg="StartContainer for \"a2c462f480c273e3a8d74448a7c7582828398c29048b79969394c273d10ed3a2\" returns successfully"
Dec 16 13:16:57.484302 kubelet[3348]: E1216 13:16:57.484257 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b"
Dec 16 13:16:57.583166 kubelet[3348]: I1216 13:16:57.582443 3348 scope.go:117] "RemoveContainer" containerID="a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3"
Dec 16 13:16:57.648144 containerd[1981]: time="2025-12-16T13:16:57.647857727Z" level=info msg="CreateContainer within sandbox \"27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 16 13:16:57.671287 containerd[1981]: time="2025-12-16T13:16:57.669659276Z" level=info msg="Container 9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:16:57.687005 containerd[1981]: time="2025-12-16T13:16:57.686930111Z" level=info msg="CreateContainer within sandbox \"27b55db2a6eaa54457ce0fa7b5159786a52be630a4ec2f0e0796e49081a44d0a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2\""
Dec 16 13:16:57.688880 containerd[1981]: time="2025-12-16T13:16:57.688849564Z" level=info msg="StartContainer for \"9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2\""
Dec 16 13:16:57.689845 containerd[1981]: time="2025-12-16T13:16:57.689808987Z" level=info msg="connecting to shim 9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2" address="unix:///run/containerd/s/a93111817493a2d75262f055121b8b55db0a28c7cbee108c2c03d9bc04279ff4" protocol=ttrpc version=3
Dec 16 13:16:57.725809 systemd[1]: Started cri-containerd-9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2.scope - libcontainer container 9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2.
Dec 16 13:16:57.795293 containerd[1981]: time="2025-12-16T13:16:57.795163774Z" level=info msg="StartContainer for \"9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2\" returns successfully"
Dec 16 13:17:00.261967 systemd[1]: cri-containerd-ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2.scope: Deactivated successfully.
Dec 16 13:17:00.263806 systemd[1]: cri-containerd-ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2.scope: Consumed 2.379s CPU time, 36.7M memory peak, 28M read from disk.
Dec 16 13:17:00.267431 containerd[1981]: time="2025-12-16T13:17:00.264994597Z" level=info msg="received container exit event container_id:\"ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2\" id:\"ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2\" pid:3193 exit_status:1 exited_at:{seconds:1765891020 nanos:263603365}"
Dec 16 13:17:00.296944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2-rootfs.mount: Deactivated successfully.
Dec 16 13:17:00.481143 kubelet[3348]: E1216 13:17:00.481080 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10"
Dec 16 13:17:00.596762 kubelet[3348]: I1216 13:17:00.596716 3348 scope.go:117] "RemoveContainer" containerID="ddf8b94922c56d531f0e9b68df7af19e3d7f9ced52b95ccfca91c05d10c9bbf2"
Dec 16 13:17:00.599281 containerd[1981]: time="2025-12-16T13:17:00.599241371Z" level=info msg="CreateContainer within sandbox \"527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 16 13:17:00.621005 containerd[1981]: time="2025-12-16T13:17:00.620285951Z" level=info msg="Container 9cf425fab9d10f8bfb25aa25df789b8463f080c26bda6f366ed5776bcb111ddd: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:17:00.635937 containerd[1981]: time="2025-12-16T13:17:00.635870426Z" level=info msg="CreateContainer within sandbox \"527deae728416d71bd7012fb06236cca13a2e41976017a9bfd5f96dd11c7530a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9cf425fab9d10f8bfb25aa25df789b8463f080c26bda6f366ed5776bcb111ddd\""
Dec 16 13:17:00.636480 containerd[1981]: time="2025-12-16T13:17:00.636453598Z" level=info msg="StartContainer for \"9cf425fab9d10f8bfb25aa25df789b8463f080c26bda6f366ed5776bcb111ddd\""
Dec 16 13:17:00.637539 containerd[1981]: time="2025-12-16T13:17:00.637516337Z" level=info msg="connecting to shim 9cf425fab9d10f8bfb25aa25df789b8463f080c26bda6f366ed5776bcb111ddd" address="unix:///run/containerd/s/ef74835e8356bac2de5a19f95d72827cae06f2d6abb3d6b0600e09042bce5c33" protocol=ttrpc version=3
Dec 16 13:17:00.662053 systemd[1]: Started cri-containerd-9cf425fab9d10f8bfb25aa25df789b8463f080c26bda6f366ed5776bcb111ddd.scope - libcontainer container 9cf425fab9d10f8bfb25aa25df789b8463f080c26bda6f366ed5776bcb111ddd.
Dec 16 13:17:00.725680 containerd[1981]: time="2025-12-16T13:17:00.725627186Z" level=info msg="StartContainer for \"9cf425fab9d10f8bfb25aa25df789b8463f080c26bda6f366ed5776bcb111ddd\" returns successfully"
Dec 16 13:17:04.755083 kubelet[3348]: E1216 13:17:04.755007 3348 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-249?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 13:17:05.481451 kubelet[3348]: E1216 13:17:05.481276 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-cncts" podUID="bb85ac3e-0aa1-45a4-b775-5a01ecf1dcb6"
Dec 16 13:17:06.481813 kubelet[3348]: E1216 13:17:06.481736 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fc58dbc98-4xrqt" podUID="63986b45-f828-491f-8283-58bdcda10705"
Dec 16 13:17:07.481436 kubelet[3348]: E1216 13:17:07.481318 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6cf79d7c7c-wlbjc" podUID="283d7557-65a8-4b3b-9bfa-2489f569eafb"
Dec 16 13:17:07.481841 kubelet[3348]: E1216 13:17:07.481795 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mtpks" podUID="849d73a2-70ae-4c16-a2df-5353f11e5191"
Dec 16 13:17:08.481042 kubelet[3348]: E1216 13:17:08.481000 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-x8rs5" podUID="d1b7644f-3acf-411e-a5e8-2f3cc85e178b"
Dec 16 13:17:09.373076 systemd[1]: cri-containerd-9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2.scope: Deactivated successfully.
Dec 16 13:17:09.373520 systemd[1]: cri-containerd-9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2.scope: Consumed 294ms CPU time, 72.1M memory peak, 31.4M read from disk.
Dec 16 13:17:09.375312 containerd[1981]: time="2025-12-16T13:17:09.375017409Z" level=info msg="received container exit event container_id:\"9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2\" id:\"9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2\" pid:6004 exit_status:1 exited_at:{seconds:1765891029 nanos:374421195}"
Dec 16 13:17:09.401519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2-rootfs.mount: Deactivated successfully.
Dec 16 13:17:09.649510 kubelet[3348]: I1216 13:17:09.648383 3348 scope.go:117] "RemoveContainer" containerID="a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3"
Dec 16 13:17:09.649510 kubelet[3348]: I1216 13:17:09.648612 3348 scope.go:117] "RemoveContainer" containerID="9d5bfd7a6e933958d464723f76adde7947ef0389090bbb8a02adb26fc52315c2"
Dec 16 13:17:09.649510 kubelet[3348]: E1216 13:17:09.648856 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-t85z9_tigera-operator(c8c4912d-3b5a-4542-a91b-105c563a5599)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-t85z9" podUID="c8c4912d-3b5a-4542-a91b-105c563a5599"
Dec 16 13:17:09.726424 containerd[1981]: time="2025-12-16T13:17:09.726377455Z" level=info msg="RemoveContainer for \"a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3\""
Dec 16 13:17:09.749898 containerd[1981]: time="2025-12-16T13:17:09.749843818Z" level=info msg="RemoveContainer for \"a71f66a55c7f5117e701900cdd2e9356901d221ad236c5dd7fea0fbcc4cf34d3\" returns successfully"
Dec 16 13:17:14.755896 kubelet[3348]: E1216 13:17:14.755785 3348 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.249:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-249?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 16 13:17:15.481221 kubelet[3348]: E1216 13:17:15.481155 3348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5dbb4c8d86-dk448" podUID="97ebb483-74aa-4963-b528-353f8ea2fd10"